Test Report: Hyper-V_Windows 18634

                    
743ee2f6c19b1c9aeee0e19f36a4d6af542f1699:2024-04-15:34041

Tests failed (21/208)

TestAddons/parallel/Registry (74.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 25.1096ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fhvrv" [c16cb697-f687-4d6e-a843-78d29be15574] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0145959s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8s68r" [a017a726-3edd-4a71-a68d-edcc93eb94e3] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0168502s
addons_test.go:340: (dbg) Run:  kubectl --context addons-961400 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-961400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-961400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.1328771s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 ip: (2.9862839s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0415 17:47:09.075808   13852 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-961400 ip"
2024/04/15 17:47:11 [DEBUG] GET http://172.19.57.138:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable registry --alsologtostderr -v=1: (16.8181489s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-961400 -n addons-961400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-961400 -n addons-961400: (13.8652983s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 logs -n 25: (10.3414414s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |                     |
	|         | -p download-only-994200                                                                     |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |                |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-994200                                                                     | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| start   | -o=json --download-only                                                                     | download-only-472100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC |                     |
	|         | -p download-only-472100                                                                     |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                                                |                      |                   |                |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-472100                                                                     | download-only-472100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| start   | -o=json --download-only                                                                     | download-only-230500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC |                     |
	|         | -p download-only-230500                                                                     |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                                                           |                      |                   |                |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-230500                                                                     | download-only-230500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-994200                                                                     | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-472100                                                                     | download-only-472100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-230500                                                                     | download-only-230500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-605800 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC |                     |
	|         | binary-mirror-605800                                                                        |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |                |                     |                     |
	|         | http://127.0.0.1:50128                                                                      |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |                |                     |                     |
	| delete  | -p binary-mirror-605800                                                                     | binary-mirror-605800 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:40 UTC | 15 Apr 24 17:40 UTC |
	| addons  | enable dashboard -p                                                                         | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:40 UTC |                     |
	|         | addons-961400                                                                               |                      |                   |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:40 UTC |                     |
	|         | addons-961400                                                                               |                      |                   |                |                     |                     |
	| start   | -p addons-961400 --wait=true                                                                | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:40 UTC | 15 Apr 24 17:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |                |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |                |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |                |                     |                     |
	| addons  | addons-961400 addons                                                                        | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:46 UTC | 15 Apr 24 17:47 UTC |
	|         | disable metrics-server                                                                      |                      |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:46 UTC | 15 Apr 24 17:47 UTC |
	|         | addons-961400                                                                               |                      |                   |                |                     |                     |
	| ip      | addons-961400 ip                                                                            | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:47 UTC | 15 Apr 24 17:47 UTC |
	| addons  | addons-961400 addons disable                                                                | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:47 UTC | 15 Apr 24 17:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |                |                     |                     |
	|         | -v=1                                                                                        |                      |                   |                |                     |                     |
	| addons  | addons-961400 addons disable                                                                | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:47 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |                |                     |                     |
	|         | -v=1                                                                                        |                      |                   |                |                     |                     |
	| ssh     | addons-961400 ssh cat                                                                       | addons-961400        | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:47 UTC |                     |
	|         | /opt/local-path-provisioner/pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8_default_test-pvc/file1 |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:40:05
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:40:05.408947    1632 out.go:291] Setting OutFile to fd 808 ...
	I0415 17:40:05.409809    1632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:40:05.409809    1632 out.go:304] Setting ErrFile to fd 640...
	I0415 17:40:05.409809    1632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:40:05.437389    1632 out.go:298] Setting JSON to false
	I0415 17:40:05.441450    1632 start.go:129] hostinfo: {"hostname":"minikube6","uptime":14532,"bootTime":1713188273,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 17:40:05.441450    1632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:40:05.448463    1632 out.go:177] * [addons-961400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:40:05.455555    1632 notify.go:220] Checking for updates...
	I0415 17:40:05.457843    1632 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 17:40:05.462142    1632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 17:40:05.467370    1632 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 17:40:05.471279    1632 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 17:40:05.473283    1632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 17:40:05.476659    1632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:40:11.304006    1632 out.go:177] * Using the hyperv driver based on user configuration
	I0415 17:40:11.307956    1632 start.go:297] selected driver: hyperv
	I0415 17:40:11.307956    1632 start.go:901] validating driver "hyperv" against <nil>
	I0415 17:40:11.308145    1632 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 17:40:11.362199    1632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:40:11.363645    1632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 17:40:11.363645    1632 cni.go:84] Creating CNI manager for ""
	I0415 17:40:11.363645    1632 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:40:11.363645    1632 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 17:40:11.363645    1632 start.go:340] cluster config:
	{Name:addons-961400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-961400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:40:11.363645    1632 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:40:11.368854    1632 out.go:177] * Starting "addons-961400" primary control-plane node in "addons-961400" cluster
	I0415 17:40:11.371723    1632 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:40:11.371723    1632 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:40:11.371723    1632 cache.go:56] Caching tarball of preloaded images
	I0415 17:40:11.372437    1632 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 17:40:11.372700    1632 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 17:40:11.372883    1632 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\config.json ...
	I0415 17:40:11.372883    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\config.json: {Name:mkfbd8a6261ee037edddbf047df820b5cf984466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:40:11.374700    1632 start.go:360] acquireMachinesLock for addons-961400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 17:40:11.374700    1632 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-961400"
	I0415 17:40:11.374700    1632 start.go:93] Provisioning new machine with config: &{Name:addons-961400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-961400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 17:40:11.374700    1632 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 17:40:11.377935    1632 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0415 17:40:11.378928    1632 start.go:159] libmachine.API.Create for "addons-961400" (driver="hyperv")
	I0415 17:40:11.378928    1632 client.go:168] LocalClient.Create starting
	I0415 17:40:11.379254    1632 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 17:40:11.446129    1632 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 17:40:11.666879    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 17:40:13.988376    1632 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 17:40:13.988376    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:13.989005    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 17:40:15.875336    1632 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 17:40:15.875336    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:15.875758    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 17:40:17.463137    1632 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 17:40:17.463137    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:17.464103    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 17:40:21.589144    1632 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 17:40:21.589722    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:21.591670    1632 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 17:40:22.134581    1632 main.go:141] libmachine: Creating SSH key...
	I0415 17:40:22.226058    1632 main.go:141] libmachine: Creating VM...
	I0415 17:40:22.226058    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 17:40:25.240425    1632 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 17:40:25.240650    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:25.240720    1632 main.go:141] libmachine: Using switch "Default Switch"
	I0415 17:40:25.240720    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 17:40:27.111613    1632 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 17:40:27.111613    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:27.112740    1632 main.go:141] libmachine: Creating VHD
	I0415 17:40:27.112740    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 17:40:31.089201    1632 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AA41DCF1-226D-4491-8558-21D123183653
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 17:40:31.089366    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:31.089366    1632 main.go:141] libmachine: Writing magic tar header
	I0415 17:40:31.089366    1632 main.go:141] libmachine: Writing SSH key tar header
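The "magic tar header" lines explain the otherwise odd VHD dance (`New-VHD -Fixed` at 10MB, then `Convert-VHD` to dynamic, then `Resize-VHD` to 20000MB): a fixed VHD is raw bytes followed by a 512-byte footer, so a tar stream containing the SSH key can be written directly at offset 0 and unpacked by the guest on first boot. A minimal sketch of that idea under those assumptions (`write_key_tar` and the demo key are hypothetical, not minikube's actual code):

```python
import io
import os
import tarfile
import tempfile

def write_key_tar(disk_path: str, pubkey: bytes) -> None:
    """Write a tar stream holding the SSH key at the very start of a raw disk image."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(pubkey)
        tar.addfile(info, io.BytesIO(pubkey))
    with open(disk_path, "r+b") as disk:
        disk.seek(0)                     # tar archive goes at offset 0
        disk.write(buf.getvalue())       # zeros after it read as end-of-archive

# Demo against a zero-filled 10 MB image standing in for fixed.vhd's payload:
disk = os.path.join(tempfile.mkdtemp(), "fixed.img")
with open(disk, "wb") as f:
    f.write(b"\0" * (10 * 1024 * 1024))
write_key_tar(disk, b"ssh-rsa AAAA... demo@minikube\n")
with tarfile.open(disk) as tar:          # tarfile stops at the zero blocks
    names = tar.getnames()
print(names)  # ['.ssh/authorized_keys']
```

The guest-side automount can then detect the tar header on the data disk and extract the key before formatting the remaining space.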
	I0415 17:40:31.099297    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 17:40:34.481162    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:40:34.482068    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:34.482068    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\disk.vhd' -SizeBytes 20000MB
	I0415 17:40:37.297869    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:40:37.297869    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:37.298779    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-961400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0415 17:40:41.298647    1632 main.go:141] libmachine: [stdout =====>] : 
Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-961400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 17:40:41.298647    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:41.298787    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-961400 -DynamicMemoryEnabled $false
	I0415 17:40:43.771696    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:40:43.771696    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:43.772575    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-961400 -Count 2
	I0415 17:40:46.124160    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:40:46.124160    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:46.124318    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-961400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\boot2docker.iso'
	I0415 17:40:48.969451    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:40:48.969451    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:48.970420    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-961400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\disk.vhd'
	I0415 17:40:51.876051    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:40:51.876561    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:51.876561    1632 main.go:141] libmachine: Starting VM...
	I0415 17:40:51.876650    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-961400
	I0415 17:40:55.420524    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:40:55.426382    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:55.426382    1632 main.go:141] libmachine: Waiting for host to start...
	I0415 17:40:55.426465    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:40:57.950766    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:40:57.950766    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:40:57.951274    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:00.657700    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:41:00.657700    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:01.664148    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:04.040690    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:04.041537    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:04.041537    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:06.756757    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:41:06.756802    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:07.763669    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:10.135610    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:10.135892    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:10.136078    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:12.856416    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:41:12.856416    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:13.857856    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:16.267474    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:16.267474    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:16.268469    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:18.960927    1632 main.go:141] libmachine: [stdout =====>] : 
	I0415 17:41:18.960927    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:19.962597    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:22.407158    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:22.407421    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:22.407421    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:25.197441    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:41:25.197441    1632 main.go:141] libmachine: [stderr =====>] : 
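The "Waiting for host to start..." section above is a simple poll loop: query the VM state, then the first IP address of the first network adapter, and retry on empty stdout until DHCP hands the guest a lease (here `172.19.57.138` after roughly 30 seconds). The loop can be sketched as follows (`wait_for_ip` and `query_ip` are stand-ins for the real PowerShell invocations):

```python
import time

def wait_for_ip(query_ip, timeout=120, interval=1.0):
    """Poll query_ip() until it returns a non-empty address or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = query_ip()
        if ip:                # empty stdout means no DHCP lease yet
            return ip
        time.sleep(interval)  # the log shows ~1s pauses between rounds
    raise TimeoutError("VM never reported an IP address")

# Simulate four empty polls followed by a lease, as in the log above:
responses = iter(["", "", "", "", "172.19.57.138"])
ip = wait_for_ip(lambda: next(responses), interval=0)
print(ip)  # 172.19.57.138
```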
	I0415 17:41:25.197624    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:27.559367    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:27.559433    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:27.559433    1632 machine.go:94] provisionDockerMachine start ...
	I0415 17:41:27.559433    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:29.853768    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:29.854091    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:29.854091    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:32.568452    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:41:32.569527    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:32.576278    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:41:32.587329    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:41:32.587329    1632 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 17:41:32.721697    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 17:41:32.721697    1632 buildroot.go:166] provisioning hostname "addons-961400"
	I0415 17:41:32.721697    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:35.004088    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:35.004179    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:35.004290    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:37.815778    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:41:37.815778    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:37.822837    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:41:37.824326    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:41:37.824326    1632 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-961400 && echo "addons-961400" | sudo tee /etc/hostname
	I0415 17:41:37.981970    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-961400
	
	I0415 17:41:37.982118    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:40.294025    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:40.295095    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:40.295166    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:43.059811    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:41:43.060638    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:43.068132    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:41:43.068669    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:41:43.068669    1632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-961400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-961400/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-961400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 17:41:43.229431    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
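The SSH command above makes /etc/hosts resolve the new hostname: if no line already ends in `addons-961400`, it either rewrites an existing `127.0.1.1` entry in place or appends one. The same logic as a pure-Python sketch (`ensure_hostname` is hypothetical, mirroring the grep/sed calls):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the grep/sed snippet: idempotently map 127.0.1.1 to the hostname."""
    # grep -xq '.*\s<name>': some line already ends with the hostname
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.M):
        return hosts
    # grep -xq '127.0.1.1\s.*': rewrite the existing entry in place
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    # otherwise append a fresh entry (the `tee -a` branch)
    return hosts + f"127.0.1.1 {name}\n"

hosts = "127.0.0.1 localhost\n127.0.1.1 minikube\n"
updated = ensure_hostname(hosts, "addons-961400")
print(updated)
```

Running it again is a no-op, which is why the command is safe to repeat on every provision.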
	I0415 17:41:43.230078    1632 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 17:41:43.230177    1632 buildroot.go:174] setting up certificates
	I0415 17:41:43.230177    1632 provision.go:84] configureAuth start
	I0415 17:41:43.230260    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:45.522766    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:45.522766    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:45.522766    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:48.213737    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:41:48.213737    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:48.214120    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:50.520827    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:50.520827    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:50.521510    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:53.301070    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:41:53.302067    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:53.302117    1632 provision.go:143] copyHostCerts
	I0415 17:41:53.302646    1632 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 17:41:53.304140    1632 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 17:41:53.305706    1632 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 17:41:53.306855    1632 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-961400 san=[127.0.0.1 172.19.57.138 addons-961400 localhost minikube]
	I0415 17:41:53.513942    1632 provision.go:177] copyRemoteCerts
	I0415 17:41:53.529419    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 17:41:53.529419    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:41:55.849382    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:41:55.849752    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:55.849752    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:41:58.604295    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:41:58.604295    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:41:58.606348    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:41:58.716919    1632 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1874586s)
	I0415 17:41:58.717743    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 17:41:58.766711    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 17:41:58.817893    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 17:41:58.867543    1632 provision.go:87] duration metric: took 15.6371183s to configureAuth
	I0415 17:41:58.867665    1632 buildroot.go:189] setting minikube options for container-runtime
	I0415 17:41:58.868444    1632 config.go:182] Loaded profile config "addons-961400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:41:58.868502    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:01.230243    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:01.230243    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:01.230963    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:03.945568    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:03.946066    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:03.953176    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:42:03.953811    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:42:03.953811    1632 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 17:42:04.076398    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 17:42:04.076594    1632 buildroot.go:70] root file system type: tmpfs
	I0415 17:42:04.076825    1632 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 17:42:04.076825    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:06.387297    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:06.387297    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:06.387966    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:09.106614    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:09.107829    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:09.114545    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:42:09.115310    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:42:09.115310    1632 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]

	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 17:42:09.274458    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 17:42:09.274458    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:11.594140    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:11.594140    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:11.595053    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:14.353324    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:14.353991    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:14.360947    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:42:14.360947    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:42:14.360947    1632 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 17:42:16.586148    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
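The `diff ... || { mv ...; systemctl ...; }` one-liner above is an idempotent install: the new unit only replaces `/lib/systemd/system/docker.service`, and docker is only reloaded/enabled/restarted, when the file actually changed (here it did not exist yet, hence the `can't stat` message and the fresh symlink). The pattern, sketched in Python with the `systemctl` calls stubbed out (`install_if_changed` is hypothetical):

```python
import filecmp
import os
import shutil
import tempfile

def install_if_changed(new: str, dest: str) -> bool:
    """Move the staged unit into place only if it differs from the installed one."""
    if os.path.exists(dest) and filecmp.cmp(new, dest, shallow=False):
        os.remove(new)            # identical: discard the staged copy
        return False              # no daemon-reload / restart needed
    shutil.move(new, dest)
    # subprocess.run(["systemctl", "daemon-reload"], check=True)   # stubbed
    # subprocess.run(["systemctl", "enable", "docker"], check=True)
    # subprocess.run(["systemctl", "restart", "docker"], check=True)
    return True

d = tempfile.mkdtemp()
new = os.path.join(d, "docker.service.new")
dest = os.path.join(d, "docker.service")
with open(new, "w") as f:
    f.write("[Unit]\nDescription=Docker Application Container Engine\n")
changed = install_if_changed(new, dest)
print(changed)  # True: docker.service did not exist yet, as in the log
```

On subsequent provisions the unit content is unchanged and the service is left alone.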
	
	I0415 17:42:16.586148    1632 machine.go:97] duration metric: took 49.0263271s to provisionDockerMachine
	I0415 17:42:16.586148    1632 client.go:171] duration metric: took 2m5.2062307s to LocalClient.Create
	I0415 17:42:16.586148    1632 start.go:167] duration metric: took 2m5.2062307s to libmachine.API.Create "addons-961400"
	I0415 17:42:16.586148    1632 start.go:293] postStartSetup for "addons-961400" (driver="hyperv")
	I0415 17:42:16.586148    1632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 17:42:16.605779    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 17:42:16.605779    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:18.901348    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:18.901348    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:18.902155    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:21.657907    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:21.657907    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:21.658649    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:42:21.768833    1632 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1630127s)
	I0415 17:42:21.784098    1632 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 17:42:21.794597    1632 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 17:42:21.794597    1632 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 17:42:21.795229    1632 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 17:42:21.795497    1632 start.go:296] duration metric: took 5.2093082s for postStartSetup
	I0415 17:42:21.798479    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:24.149223    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:24.150106    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:24.150195    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:26.924901    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:26.924901    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:26.926203    1632 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\config.json ...
	I0415 17:42:26.933114    1632 start.go:128] duration metric: took 2m15.5572153s to createHost
	I0415 17:42:26.933114    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:29.217030    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:29.217371    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:29.217371    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:31.925361    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:31.925566    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:31.932648    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:42:31.933452    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:42:31.933452    1632 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 17:42:32.062983    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713202952.077448690
	
	I0415 17:42:32.062983    1632 fix.go:216] guest clock: 1713202952.077448690
	I0415 17:42:32.062983    1632 fix.go:229] Guest: 2024-04-15 17:42:32.07744869 +0000 UTC Remote: 2024-04-15 17:42:26.9331149 +0000 UTC m=+141.738582101 (delta=5.14433379s)
	I0415 17:42:32.062983    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:34.356857    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:34.356857    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:34.357700    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:37.092037    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:37.092037    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:37.100228    1632 main.go:141] libmachine: Using SSH client type: native
	I0415 17:42:37.100849    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.57.138 22 <nil> <nil>}
	I0415 17:42:37.100849    1632 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713202952
	I0415 17:42:37.249732    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 17:42:32 UTC 2024
	
	I0415 17:42:37.249732    1632 fix.go:236] clock set: Mon Apr 15 17:42:32 UTC 2024
	 (err=<nil>)
	I0415 17:42:37.249732    1632 start.go:83] releasing machines lock for "addons-961400", held for 2m25.8738796s
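	The guest-clock fix logged above compares the guest's epoch (read over SSH) against the host's, and resets the guest with `sudo date -s @<epoch>` when the skew is large. A minimal sketch of that comparison, with a simulated 5-second skew standing in for the 5.14s delta seen in the log (no SSH involved here):

```shell
# Compare guest and host epochs and decide whether to reset the guest.
# guest_epoch is simulated; in the real flow it comes from running
# `date +%s.%N` on the guest over SSH.
host_epoch=$(date +%s)
guest_epoch=$((host_epoch + 5))   # simulated skew
delta=$((guest_epoch - host_epoch))
if [ "${delta#-}" -gt 2 ]; then
  # the real flow runs this over SSH on the guest:
  echo "would run: sudo date -s @$host_epoch"
fi
echo "delta=$delta"
```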
	I0415 17:42:37.250341    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:39.510223    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:39.510799    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:39.510853    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:42.398534    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:42.398534    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:42.403860    1632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 17:42:42.404063    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:42.415122    1632 ssh_runner.go:195] Run: cat /version.json
	I0415 17:42:42.415122    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:42:44.924822    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:44.924879    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:44.924879    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:44.949740    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:42:44.949740    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:44.950821    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:42:47.758286    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:47.758317    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:47.758877    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:42:47.787400    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:42:47.787400    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:42:47.788507    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:42:47.961532    1632 ssh_runner.go:235] Completed: cat /version.json: (5.5463663s)
	I0415 17:42:47.961623    1632 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5576275s)
	I0415 17:42:47.976637    1632 ssh_runner.go:195] Run: systemctl --version
	I0415 17:42:47.999795    1632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 17:42:48.010099    1632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 17:42:48.024588    1632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 17:42:48.058468    1632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 17:42:48.058468    1632 start.go:494] detecting cgroup driver to use...
	I0415 17:42:48.058468    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 17:42:48.109760    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 17:42:48.147908    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 17:42:48.172773    1632 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 17:42:48.188362    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 17:42:48.224498    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 17:42:48.261744    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 17:42:48.300521    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 17:42:48.337283    1632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 17:42:48.372395    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 17:42:48.409992    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 17:42:48.447832    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 17:42:48.484064    1632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 17:42:48.521089    1632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 17:42:48.554345    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:42:48.776141    1632 ssh_runner.go:195] Run: sudo systemctl restart containerd
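	The sequence above rewrites /etc/containerd/config.toml with a series of in-place `sed` substitutions, then reloads and restarts containerd. A sketch of the cgroup-driver substitution against a scratch copy of the file, so it is safe to run anywhere:

```shell
# Scratch stand-in for /etc/containerd/config.toml.
cat > config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log shows for forcing the cgroupfs driver:
# preserve indentation, force SystemdCgroup to false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
grep SystemdCgroup config.toml
```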
	I0415 17:42:48.810377    1632 start.go:494] detecting cgroup driver to use...
	I0415 17:42:48.825780    1632 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 17:42:48.863830    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 17:42:48.903743    1632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 17:42:48.954130    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 17:42:48.994721    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 17:42:49.035366    1632 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 17:42:49.108064    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 17:42:49.132013    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 17:42:49.184723    1632 ssh_runner.go:195] Run: which cri-dockerd
	I0415 17:42:49.205473    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 17:42:49.225087    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 17:42:49.276152    1632 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 17:42:49.506921    1632 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 17:42:49.714991    1632 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 17:42:49.715306    1632 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 17:42:49.768186    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:42:49.982319    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 17:42:52.530310    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5474967s)
	I0415 17:42:52.545769    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 17:42:52.589069    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 17:42:52.628536    1632 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 17:42:52.844523    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 17:42:53.077825    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:42:53.291887    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 17:42:53.336729    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 17:42:53.378599    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:42:53.583594    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 17:42:53.703931    1632 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 17:42:53.718837    1632 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 17:42:53.727845    1632 start.go:562] Will wait 60s for crictl version
	I0415 17:42:53.742113    1632 ssh_runner.go:195] Run: which crictl
	I0415 17:42:53.762599    1632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 17:42:53.822690    1632 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 17:42:53.833981    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 17:42:53.881136    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 17:42:53.921061    1632 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 17:42:53.921061    1632 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 17:42:53.924758    1632 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 17:42:53.924758    1632 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 17:42:53.924758    1632 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 17:42:53.924758    1632 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 17:42:53.927751    1632 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 17:42:53.927751    1632 ip.go:210] interface addr: 172.19.48.1/20
	I0415 17:42:53.941738    1632 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 17:42:53.948973    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
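	The command above makes the host.minikube.internal entry idempotent: it strips any existing line for the name, appends the fresh mapping, and copies the result back. A sketch against a scratch file (the grep pattern is simplified relative to the logged command):

```shell
# Scratch stand-in for /etc/hosts.
printf '127.0.0.1\tlocalhost\n172.19.48.1\thost.minikube.internal\n' > hosts
# Drop any existing entry for the name, then append the current mapping.
{ grep -v 'host.minikube.internal' hosts; printf '172.19.48.1\thost.minikube.internal\n'; } > hosts.new
mv hosts.new hosts
grep -c 'host.minikube.internal' hosts
```

	Running the block repeatedly leaves exactly one entry for the name, which is the point of the remove-then-append pattern.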
	I0415 17:42:53.973250    1632 kubeadm.go:877] updating cluster {Name:addons-961400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
9.3 ClusterName:addons-961400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.57.138 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 17:42:53.973667    1632 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:42:53.984371    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 17:42:54.012465    1632 docker.go:685] Got preloaded images: 
	I0415 17:42:54.012583    1632 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 17:42:54.026481    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 17:42:54.062399    1632 ssh_runner.go:195] Run: which lz4
	I0415 17:42:54.083921    1632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 17:42:54.090898    1632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 17:42:54.090898    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 17:42:56.080423    1632 docker.go:649] duration metric: took 2.0115547s to copy over tarball
	I0415 17:42:56.094458    1632 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 17:43:02.070546    1632 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.9750744s)
	I0415 17:43:02.070701    1632 ssh_runner.go:146] rm: /preloaded.tar.lz4
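	The preload flow above follows a check-then-copy pattern: `stat` on the target path exits non-zero when the tarball is absent, which triggers the scp transfer. A sketch of that decision (the file name is reused for illustration; no real tarball is involved):

```shell
# stat's exit status decides whether the transfer would run.
rm -f preloaded.tar.lz4
need=false
if ! stat -c "%s %y" preloaded.tar.lz4 2>/dev/null; then
  need=true   # file missing: the real flow scps the preload tarball here
fi
echo "need_transfer=$need"
```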
	I0415 17:43:02.140356    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 17:43:02.165356    1632 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 17:43:02.215246    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:43:02.422261    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 17:43:08.186055    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.7637478s)
	I0415 17:43:08.198957    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 17:43:08.227094    1632 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 17:43:08.227284    1632 cache_images.go:84] Images are preloaded, skipping loading
	I0415 17:43:08.227340    1632 kubeadm.go:928] updating node { 172.19.57.138 8443 v1.29.3 docker true true} ...
	I0415 17:43:08.227661    1632 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-961400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.57.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-961400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 17:43:08.238701    1632 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 17:43:08.276977    1632 cni.go:84] Creating CNI manager for ""
	I0415 17:43:08.277035    1632 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:43:08.277104    1632 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 17:43:08.277160    1632 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.57.138 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-961400 NodeName:addons-961400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.57.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.57.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 17:43:08.277350    1632 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.57.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-961400"
	  kubeletExtraArgs:
	    node-ip: 172.19.57.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.57.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 17:43:08.291429    1632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 17:43:08.313904    1632 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 17:43:08.334233    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 17:43:08.355007    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0415 17:43:08.389783    1632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 17:43:08.425922    1632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0415 17:43:08.473434    1632 ssh_runner.go:195] Run: grep 172.19.57.138	control-plane.minikube.internal$ /etc/hosts
	I0415 17:43:08.480796    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.57.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 17:43:08.521758    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:43:08.749743    1632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 17:43:08.784950    1632 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400 for IP: 172.19.57.138
	I0415 17:43:08.785005    1632 certs.go:194] generating shared ca certs ...
	I0415 17:43:08.785069    1632 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:08.785545    1632 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 17:43:09.189042    1632 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0415 17:43:09.189998    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.191234    1632 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0415 17:43:09.191234    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.191647    1632 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 17:43:09.289711    1632 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0415 17:43:09.289711    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.291541    1632 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0415 17:43:09.291541    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.291791    1632 certs.go:256] generating profile certs ...
	I0415 17:43:09.292956    1632 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.key
	I0415 17:43:09.292956    1632 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt with IP's: []
	I0415 17:43:09.432636    1632 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt ...
	I0415 17:43:09.432636    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: {Name:mk01cc2cbee30f802bf632c30f0ac075fb6799f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.433644    1632 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.key ...
	I0415 17:43:09.433644    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.key: {Name:mk78a4da2b720d3beae1b3ca4e910a0622c904d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.434641    1632 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.key.d9892c53
	I0415 17:43:09.435548    1632 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.crt.d9892c53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.57.138]
	I0415 17:43:09.587515    1632 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.crt.d9892c53 ...
	I0415 17:43:09.587515    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.crt.d9892c53: {Name:mk1282116eb9f6207f084769cc8fe614e46c4d6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.589562    1632 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.key.d9892c53 ...
	I0415 17:43:09.589562    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.key.d9892c53: {Name:mk6e6ac0ddef9ea2ff7cc9ea0975bf1974ac202e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.590121    1632 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.crt.d9892c53 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.crt
	I0415 17:43:09.601991    1632 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.key.d9892c53 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.key
	I0415 17:43:09.602930    1632 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.key
	I0415 17:43:09.602930    1632 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.crt with IP's: []
	I0415 17:43:09.993911    1632 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.crt ...
	I0415 17:43:09.993911    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.crt: {Name:mkf62041512ce4b9af512cddb672878e16f135d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:09.995180    1632 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.key ...
	I0415 17:43:09.995180    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.key: {Name:mkb9045bd359356ad1e1684a620d5cc87e71e4b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:10.007521    1632 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 17:43:10.007521    1632 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 17:43:10.008471    1632 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 17:43:10.008739    1632 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 17:43:10.009996    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 17:43:10.064795    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 17:43:10.117890    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 17:43:10.170798    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 17:43:10.221759    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0415 17:43:10.276968    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 17:43:10.321311    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 17:43:10.371717    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 17:43:10.424573    1632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 17:43:10.473107    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 17:43:10.520414    1632 ssh_runner.go:195] Run: openssl version
	I0415 17:43:10.546010    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 17:43:10.586176    1632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:43:10.596300    1632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:43:10.609851    1632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:43:10.638711    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 17:43:10.681089    1632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 17:43:10.688929    1632 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 17:43:10.689258    1632 kubeadm.go:391] StartCluster: {Name:addons-961400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-961400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.57.138 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:43:10.699692    1632 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 17:43:10.741959    1632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 17:43:10.779735    1632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 17:43:10.813275    1632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 17:43:10.831245    1632 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 17:43:10.831245    1632 kubeadm.go:156] found existing configuration files:
	
	I0415 17:43:10.844912    1632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 17:43:10.862752    1632 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 17:43:10.877252    1632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 17:43:10.911717    1632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 17:43:10.932888    1632 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 17:43:10.952058    1632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 17:43:10.982255    1632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 17:43:10.998677    1632 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 17:43:11.013900    1632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 17:43:11.048044    1632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 17:43:11.065185    1632 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 17:43:11.082268    1632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 17:43:11.100541    1632 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 17:43:11.364912    1632 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 17:43:25.816677    1632 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 17:43:25.816677    1632 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 17:43:25.817308    1632 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 17:43:25.817454    1632 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 17:43:25.817757    1632 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 17:43:25.817977    1632 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 17:43:25.822375    1632 out.go:204]   - Generating certificates and keys ...
	I0415 17:43:25.822375    1632 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 17:43:25.822375    1632 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 17:43:25.822963    1632 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 17:43:25.823040    1632 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 17:43:25.823040    1632 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 17:43:25.823040    1632 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 17:43:25.823578    1632 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 17:43:25.823804    1632 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-961400 localhost] and IPs [172.19.57.138 127.0.0.1 ::1]
	I0415 17:43:25.823804    1632 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 17:43:25.823804    1632 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-961400 localhost] and IPs [172.19.57.138 127.0.0.1 ::1]
	I0415 17:43:25.823804    1632 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 17:43:25.823804    1632 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 17:43:25.824646    1632 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 17:43:25.824646    1632 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 17:43:25.824646    1632 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 17:43:25.824646    1632 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 17:43:25.824646    1632 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 17:43:25.824646    1632 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 17:43:25.825622    1632 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 17:43:25.825622    1632 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 17:43:25.825622    1632 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 17:43:25.828645    1632 out.go:204]   - Booting up control plane ...
	I0415 17:43:25.828645    1632 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 17:43:25.828645    1632 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 17:43:25.828645    1632 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 17:43:25.828645    1632 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 17:43:25.829631    1632 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 17:43:25.829631    1632 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 17:43:25.829631    1632 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 17:43:25.829631    1632 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.004544 seconds
	I0415 17:43:25.829631    1632 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 17:43:25.830626    1632 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 17:43:25.830626    1632 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 17:43:25.830626    1632 kubeadm.go:309] [mark-control-plane] Marking the node addons-961400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 17:43:25.830626    1632 kubeadm.go:309] [bootstrap-token] Using token: q80tel.t6giuzfi8108gnva
	I0415 17:43:25.837891    1632 out.go:204]   - Configuring RBAC rules ...
	I0415 17:43:25.837891    1632 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 17:43:25.837891    1632 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 17:43:25.837891    1632 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 17:43:25.838918    1632 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 17:43:25.838918    1632 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 17:43:25.838918    1632 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 17:43:25.838918    1632 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 17:43:25.838918    1632 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 17:43:25.839867    1632 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 17:43:25.839867    1632 kubeadm.go:309] 
	I0415 17:43:25.839867    1632 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 17:43:25.839867    1632 kubeadm.go:309] 
	I0415 17:43:25.839867    1632 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 17:43:25.839867    1632 kubeadm.go:309] 
	I0415 17:43:25.839867    1632 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 17:43:25.839867    1632 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 17:43:25.839867    1632 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 17:43:25.839867    1632 kubeadm.go:309] 
	I0415 17:43:25.839867    1632 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 17:43:25.839867    1632 kubeadm.go:309] 
	I0415 17:43:25.840867    1632 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 17:43:25.840867    1632 kubeadm.go:309] 
	I0415 17:43:25.840867    1632 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 17:43:25.840867    1632 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 17:43:25.840867    1632 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 17:43:25.840867    1632 kubeadm.go:309] 
	I0415 17:43:25.840867    1632 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 17:43:25.841865    1632 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 17:43:25.841865    1632 kubeadm.go:309] 
	I0415 17:43:25.841865    1632 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token q80tel.t6giuzfi8108gnva \
	I0415 17:43:25.841865    1632 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 17:43:25.841865    1632 kubeadm.go:309] 	--control-plane 
	I0415 17:43:25.841865    1632 kubeadm.go:309] 
	I0415 17:43:25.841865    1632 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 17:43:25.841865    1632 kubeadm.go:309] 
	I0415 17:43:25.841865    1632 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token q80tel.t6giuzfi8108gnva \
	I0415 17:43:25.842874    1632 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 17:43:25.842874    1632 cni.go:84] Creating CNI manager for ""
	I0415 17:43:25.842874    1632 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:43:25.844869    1632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 17:43:25.862876    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 17:43:25.899355    1632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0415 17:43:25.980415    1632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 17:43:25.998305    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:25.998305    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-961400 minikube.k8s.io/updated_at=2024_04_15T17_43_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=addons-961400 minikube.k8s.io/primary=true
	I0415 17:43:26.064266    1632 ops.go:34] apiserver oom_adj: -16
	I0415 17:43:26.241057    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:26.745455    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:27.254002    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:27.745179    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:28.247542    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:28.748709    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:29.245436    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:29.747431    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:30.250397    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:30.751831    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:31.240888    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:31.743558    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:32.247988    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:32.746896    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:33.254605    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:33.742077    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:34.245049    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:34.752682    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:35.252711    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:35.742465    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:36.247493    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:36.751090    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:37.253578    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:37.745003    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:38.247433    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:38.742041    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 17:43:38.866854    1632 kubeadm.go:1107] duration metric: took 12.8862722s to wait for elevateKubeSystemPrivileges
	W0415 17:43:38.866854    1632 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 17:43:38.866854    1632 kubeadm.go:393] duration metric: took 28.1773731s to StartCluster
	I0415 17:43:38.867856    1632 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:38.867856    1632 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 17:43:38.868856    1632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:43:38.869880    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 17:43:38.869880    1632 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.57.138 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 17:43:38.869880    1632 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0415 17:43:38.872969    1632 out.go:177] * Verifying Kubernetes components...
	I0415 17:43:38.869880    1632 addons.go:69] Setting yakd=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting cloud-spanner=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting default-storageclass=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting gcp-auth=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting helm-tiller=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting ingress=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting ingress-dns=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting inspektor-gadget=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting metrics-server=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting registry=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting storage-provisioner=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-961400"
	I0415 17:43:38.869880    1632 addons.go:69] Setting volumesnapshots=true in profile "addons-961400"
	I0415 17:43:38.870864    1632 config.go:182] Loaded profile config "addons-961400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:43:38.876883    1632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-961400"
	I0415 17:43:38.876883    1632 mustload.go:65] Loading cluster: addons-961400
	I0415 17:43:38.876883    1632 addons.go:234] Setting addon storage-provisioner=true in "addons-961400"
	I0415 17:43:38.876883    1632 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-961400"
	I0415 17:43:38.876883    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.876883    1632 addons.go:234] Setting addon yakd=true in "addons-961400"
	I0415 17:43:38.876883    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.876883    1632 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-961400"
	I0415 17:43:38.876883    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.877869    1632 config.go:182] Loaded profile config "addons-961400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:43:38.877869    1632 addons.go:234] Setting addon ingress=true in "addons-961400"
	I0415 17:43:38.877869    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.877869    1632 addons.go:234] Setting addon ingress-dns=true in "addons-961400"
	I0415 17:43:38.877869    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.877869    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.877869    1632 addons.go:234] Setting addon volumesnapshots=true in "addons-961400"
	I0415 17:43:38.877869    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.878868    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.878868    1632 addons.go:234] Setting addon cloud-spanner=true in "addons-961400"
	I0415 17:43:38.878868    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.878868    1632 addons.go:234] Setting addon registry=true in "addons-961400"
	I0415 17:43:38.878868    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.878868    1632 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-961400"
	I0415 17:43:38.879870    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.879870    1632 addons.go:234] Setting addon inspektor-gadget=true in "addons-961400"
	I0415 17:43:38.879870    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.879870    1632 addons.go:234] Setting addon helm-tiller=true in "addons-961400"
	I0415 17:43:38.876883    1632 addons.go:234] Setting addon metrics-server=true in "addons-961400"
	I0415 17:43:38.879870    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.879870    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:38.880861    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.882873    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.883872    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.886979    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.887867    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.887867    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.887867    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.888858    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.893081    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.893548    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.893548    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.894514    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.902480    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:38.915740    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:43:39.627733    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 17:43:40.076988    1632 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.1612387s)
	I0415 17:43:40.112989    1632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 17:43:42.660588    1632 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.0328316s)
	I0415 17:43:42.660588    1632 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 17:43:42.668575    1632 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.5555652s)
	I0415 17:43:42.671572    1632 node_ready.go:35] waiting up to 6m0s for node "addons-961400" to be "Ready" ...
	I0415 17:43:42.843302    1632 node_ready.go:49] node "addons-961400" has status "Ready":"True"
	I0415 17:43:42.843302    1632 node_ready.go:38] duration metric: took 171.7294ms for node "addons-961400" to be "Ready" ...
	I0415 17:43:42.843302    1632 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 17:43:42.913297    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-j9b94" in "kube-system" namespace to be "Ready" ...
	I0415 17:43:43.354481    1632 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-961400" context rescaled to 1 replicas
	I0415 17:43:44.946418    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:43:45.598951    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:45.598951    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:45.604059    1632 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0415 17:43:45.601362    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:45.607775    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:45.610838    1632 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0415 17:43:45.608627    1632 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0415 17:43:45.618154    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0415 17:43:45.618154    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:45.622085    1632 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 17:43:45.622085    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0415 17:43:45.622147    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:45.622914    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:45.622914    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:45.629219    1632 addons.go:234] Setting addon default-storageclass=true in "addons-961400"
	I0415 17:43:45.629219    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:45.631219    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:45.706518    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:45.706518    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:45.711530    1632 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 17:43:45.710520    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:45.717626    1632 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 17:43:45.717626    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 17:43:45.711530    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:45.718520    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:45.723529    1632 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-961400"
	I0415 17:43:45.723529    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:45.724530    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:45.917653    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:45.917653    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:45.917653    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:43:45.973243    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:45.973243    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.001409    1632 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0415 17:43:45.979669    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.021614    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.025614    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0415 17:43:46.022769    1632 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0415 17:43:46.029617    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.029748    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.029748    1632 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0415 17:43:46.029748    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0415 17:43:46.035729    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0415 17:43:46.029748    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.029748    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0415 17:43:46.029748    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.043737    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.047196    1632 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0415 17:43:46.047196    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.062735    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 17:43:46.069177    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 17:43:46.055648    1632 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0415 17:43:46.076729    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.078701    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.083637    1632 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0415 17:43:46.078701    1632 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 17:43:46.078701    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0415 17:43:46.091278    1632 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0415 17:43:46.091278    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0415 17:43:46.091278    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.083751    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0415 17:43:46.091278    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.088806    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.449024    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.449024    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.454043    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0415 17:43:46.464165    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0415 17:43:46.487166    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0415 17:43:46.487166    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.492226    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0415 17:43:46.492226    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.492226    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.506233    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.525188    1632 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0415 17:43:46.512166    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0415 17:43:46.512166    1632 out.go:177]   - Using image docker.io/registry:2.8.3
	I0415 17:43:46.513164    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:46.535154    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:46.558676    1632 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0415 17:43:46.536896    1632 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0415 17:43:46.597176    1632 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 17:43:46.606643    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0415 17:43:46.600836    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0415 17:43:46.600836    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0415 17:43:46.614836    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.625884    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0415 17:43:46.630855    1632 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0415 17:43:46.631521    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.634494    1632 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0415 17:43:46.642519    1632 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0415 17:43:46.642519    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0415 17:43:46.642519    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:46.665283    1632 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0415 17:43:46.665385    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0415 17:43:46.665484    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:47.264975    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:43:49.359911    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:43:51.610883    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:43:51.693587    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:51.694474    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:51.697488    1632 out.go:177]   - Using image docker.io/busybox:stable
	I0415 17:43:51.701467    1632 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0415 17:43:51.717723    1632 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 17:43:51.717723    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0415 17:43:51.717723    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:51.832713    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:51.832713    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:51.833672    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:51.964367    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:51.964367    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:51.964367    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:51.971749    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:51.971749    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:51.971749    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:52.004077    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:52.004077    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:52.004077    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:52.047183    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:52.047183    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:52.047183    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:52.580217    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:52.580217    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:52.580217    1632 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 17:43:52.580217    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 17:43:52.580217    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:52.625023    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:52.625023    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:52.625023    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:52.659865    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:52.659865    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:52.659865    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:53.021098    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:53.021098    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:53.022096    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:53.833629    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:53.833629    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:53.833629    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:53.847002    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:43:54.510025    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0415 17:43:54.510025    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:43:54.680493    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:54.680493    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:54.680892    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:54.895203    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:54.895203    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:54.895203    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:54.910204    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:54.910204    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:54.910204    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:55.958006    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:43:57.958547    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:43:58.394793    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:58.394793    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:58.394793    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:58.634114    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:43:58.634114    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:58.636129    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:43:58.959076    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:43:58.959076    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:58.959076    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:43:59.067825    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 17:43:59.080019    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:43:59.080019    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:59.080797    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:43:59.339616    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:43:59.339616    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:59.341237    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:43:59.459193    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:43:59.459193    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:59.459794    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:43:59.593047    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0415 17:43:59.695156    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:43:59.695156    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:59.696165    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:43:59.848837    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:43:59.848837    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:43:59.848837    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:43:59.959295    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:00.063777    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:00.063777    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:00.064736    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:00.095756    1632 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0415 17:44:00.095756    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0415 17:44:00.135477    1632 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0415 17:44:00.135477    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0415 17:44:00.183621    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:00.183621    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:00.184624    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:00.308966    1632 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0415 17:44:00.309133    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0415 17:44:00.345650    1632 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 17:44:00.345752    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0415 17:44:00.437903    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 17:44:00.543908    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.4760703s)
	I0415 17:44:00.566771    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:44:00.566771    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:00.566771    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:44:00.567774    1632 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0415 17:44:00.567774    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0415 17:44:00.630200    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 17:44:00.631374    1632 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0415 17:44:00.631374    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0415 17:44:00.675419    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 17:44:00.769706    1632 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0415 17:44:00.769779    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0415 17:44:00.826294    1632 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0415 17:44:00.826294    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0415 17:44:00.939176    1632 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0415 17:44:00.939176    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0415 17:44:00.983043    1632 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0415 17:44:00.983230    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0415 17:44:01.058789    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:01.058789    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:01.060008    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:01.143900    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:01.143900    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:01.144644    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:01.202740    1632 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0415 17:44:01.202831    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0415 17:44:01.221700    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:01.221700    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:01.222522    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:01.255900    1632 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 17:44:01.256057    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0415 17:44:01.273231    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:01.273231    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:01.274745    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:01.307209    1632 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0415 17:44:01.307330    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0415 17:44:01.338615    1632 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0415 17:44:01.338615    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0415 17:44:01.512166    1632 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 17:44:01.512253    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0415 17:44:01.574050    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 17:44:01.636043    1632 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0415 17:44:01.636043    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0415 17:44:01.856180    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 17:44:02.003306    1632 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 17:44:02.003306    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0415 17:44:02.011312    1632 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0415 17:44:02.011312    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0415 17:44:02.014317    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 17:44:02.041831    1632 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0415 17:44:02.042005    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0415 17:44:02.123124    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 17:44:02.150255    1632 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0415 17:44:02.150374    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0415 17:44:02.195140    1632 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0415 17:44:02.195209    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0415 17:44:02.318457    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.7253891s)
	I0415 17:44:02.446046    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:02.489391    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:02.489391    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:02.490204    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:02.550304    1632 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0415 17:44:02.550370    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0415 17:44:02.553930    1632 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0415 17:44:02.553930    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0415 17:44:02.588052    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:02.589004    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:02.589941    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:02.608230    1632 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0415 17:44:02.608298    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0415 17:44:02.806220    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0415 17:44:02.998623    1632 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0415 17:44:02.998623    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0415 17:44:03.064796    1632 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0415 17:44:03.064796    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0415 17:44:03.066799    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 17:44:03.338535    1632 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0415 17:44:03.338610    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0415 17:44:03.811338    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:03.811338    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:03.812612    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:04.029474    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 17:44:04.067951    1632 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0415 17:44:04.068043    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0415 17:44:04.562720    1632 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0415 17:44:04.563509    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0415 17:44:04.887579    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0415 17:44:04.977028    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:05.488334    1632 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0415 17:44:05.488520    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0415 17:44:05.525407    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0415 17:44:06.518835    1632 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0415 17:44:06.518929    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0415 17:44:06.534131    1632 addons.go:234] Setting addon gcp-auth=true in "addons-961400"
	I0415 17:44:06.534281    1632 host.go:66] Checking if "addons-961400" exists ...
	I0415 17:44:06.535868    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:44:07.096215    1632 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0415 17:44:07.096333    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0415 17:44:07.498066    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:07.686694    1632 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 17:44:07.686795    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0415 17:44:08.124872    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 17:44:08.961410    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:44:08.962177    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:08.976605    1632 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0415 17:44:08.976605    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-961400 ).state
	I0415 17:44:09.611297    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:10.820303    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.3813164s)
	I0415 17:44:10.820303    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.1900224s)
	I0415 17:44:11.411563    1632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 17:44:11.411563    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:11.412533    1632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-961400 ).networkadapters[0]).ipaddresses[0]
	I0415 17:44:11.894927    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:13.935700    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:14.331428    1632 main.go:141] libmachine: [stdout =====>] : 172.19.57.138
	
	I0415 17:44:14.331428    1632 main.go:141] libmachine: [stderr =====>] : 
	I0415 17:44:14.332461    1632 sshutil.go:53] new ssh client: &{IP:172.19.57.138 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-961400\id_rsa Username:docker}
	I0415 17:44:14.772654    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.0961252s)
	I0415 17:44:14.772654    1632 addons.go:470] Verifying addon ingress=true in "addons-961400"
	I0415 17:44:14.772654    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.1974984s)
	I0415 17:44:14.776611    1632 out.go:177] * Verifying ingress addon...
	I0415 17:44:14.772654    1632 addons.go:470] Verifying addon metrics-server=true in "addons-961400"
	I0415 17:44:14.772654    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.9163702s)
	I0415 17:44:14.772654    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.7582348s)
	I0415 17:44:14.772654    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.6494283s)
	I0415 17:44:14.773625    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.9673085s)
	I0415 17:44:14.773625    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.7067321s)
	I0415 17:44:14.773625    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.7431164s)
	I0415 17:44:14.773625    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.8859664s)
	I0415 17:44:14.782227    1632 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-961400 service yakd-dashboard -n yakd-dashboard
	
	W0415 17:44:14.780259    1632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0415 17:44:14.780259    1632 addons.go:470] Verifying addon registry=true in "addons-961400"
	I0415 17:44:14.782853    1632 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0415 17:44:14.789831    1632 retry.go:31] will retry after 210.931146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
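[editor's note] The failure above is a CRD-establishment race: the `VolumeSnapshotClass` object is applied in the same `kubectl apply` batch as the snapshot CRDs, before the API server has registered the new kinds, and `retry.go:31` schedules a retry after ~211ms. A minimal, hypothetical Python sketch of that retry-with-backoff pattern (this is an illustration, not minikube's actual `retry.go` implementation; the names `retry_with_backoff` and `apply_manifests` are invented for the example):

```python
import random
import time


def retry_with_backoff(fn, max_attempts=5, base_delay=0.2):
    """Call fn until it succeeds, sleeping with jittered exponential
    backoff between failed attempts, mirroring the retry behavior
    the log shows. Re-raises the last error if all attempts fail."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter: 0.2s, 0.4s, 0.8s, ... scaled
            # by a random factor in [0.5, 1.5) to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))


# Hypothetical stand-in for the kubectl apply that fails until the
# CRDs are established (here: until the third call).
state = {"calls": 0}

def apply_manifests():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("no matches for kind VolumeSnapshotClass")
    return "applied"


print(retry_with_backoff(apply_manifests))  # succeeds on the third attempt
```

The log resolves the race the same way in spirit: a later re-run of the apply (with `--force`) succeeds once the CRDs are established.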
	I0415 17:44:14.795017    1632 out.go:177] * Verifying registry addon...
	I0415 17:44:14.804201    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0415 17:44:14.827014    1632 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0415 17:44:14.827014    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:14.832652    1632 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0415 17:44:14.832954    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0415 17:44:14.850988    1632 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0415 17:44:15.027193    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 17:44:15.305984    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:15.313493    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:15.823187    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:15.852754    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:15.941260    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:16.326473    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:16.336031    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:16.838972    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:16.840002    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:17.328337    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.2033102s)
	I0415 17:44:17.328393    1632 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-961400"
	I0415 17:44:17.328466    1632 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.3517213s)
	I0415 17:44:17.333283    1632 out.go:177] * Verifying csi-hostpath-driver addon...
	I0415 17:44:17.341284    1632 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0415 17:44:17.340598    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0415 17:44:17.345499    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 17:44:17.348462    1632 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0415 17:44:17.348462    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0415 17:44:17.362506    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:17.363173    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:17.493272    1632 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0415 17:44:17.493272    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0415 17:44:17.577112    1632 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0415 17:44:17.577112    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:17.636087    1632 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 17:44:17.636087    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0415 17:44:17.787242    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 17:44:17.798245    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:17.821839    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:17.853993    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:18.312242    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:18.319316    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:18.363193    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:18.424873    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:18.803008    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:18.819580    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:18.872323    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:18.996105    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.9688801s)
	I0415 17:44:19.304682    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:19.311747    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:19.370026    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:19.658393    1632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.8710812s)
	I0415 17:44:19.666906    1632 addons.go:470] Verifying addon gcp-auth=true in "addons-961400"
	I0415 17:44:19.670499    1632 out.go:177] * Verifying gcp-auth addon...
	I0415 17:44:19.673756    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0415 17:44:19.693338    1632 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0415 17:44:19.693507    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:19.808893    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:19.814972    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:19.858554    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:20.188204    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:20.301911    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:20.317161    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:20.365280    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:20.429898    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:20.679879    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:20.813397    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:20.821792    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:20.859627    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:21.188036    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:21.299816    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:21.316914    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:21.366732    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:21.693256    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:21.806696    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:21.817038    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:21.850825    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:22.184864    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:22.313186    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:22.313292    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:22.360793    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:22.686412    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:22.799414    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:22.814498    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:22.863410    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:22.926879    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:23.192554    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:23.306542    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:23.311516    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:23.354283    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:23.685920    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:23.797316    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:23.812949    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:23.863929    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:24.180149    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:24.307948    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:24.317143    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:24.356551    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:24.688800    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:24.801120    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:24.816447    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:24.865076    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:24.931358    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:25.179419    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:25.307154    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:25.310957    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:25.353824    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:25.687339    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:25.814852    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:25.815885    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:25.859202    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:26.190410    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:26.298599    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:26.317694    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:26.363713    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:26.686137    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:26.797589    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:26.812590    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:26.865032    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:27.201424    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:27.398543    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:27.399060    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:27.402273    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:27.427237    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:27.690574    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:27.801306    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:27.816567    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:27.865784    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:28.189892    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:28.307838    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:28.315924    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:28.365907    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:28.683340    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:28.811950    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:28.812584    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:28.859650    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:29.189064    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:29.305905    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:29.312597    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:29.368265    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:29.438125    1632 pod_ready.go:102] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"False"
	I0415 17:44:29.695944    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:29.802256    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:29.827300    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:29.876559    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:30.212923    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:30.301956    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:30.319880    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:30.365961    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:30.681760    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:30.819027    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:30.821331    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:30.888706    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:30.922978    1632 pod_ready.go:92] pod "coredns-76f75df574-j9b94" in "kube-system" namespace has status "Ready":"True"
	I0415 17:44:30.922978    1632 pod_ready.go:81] duration metric: took 48.0092976s for pod "coredns-76f75df574-j9b94" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.922978    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k6sqd" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.926092    1632 pod_ready.go:97] error getting pod "coredns-76f75df574-k6sqd" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-k6sqd" not found
	I0415 17:44:30.926092    1632 pod_ready.go:81] duration metric: took 3.1147ms for pod "coredns-76f75df574-k6sqd" in "kube-system" namespace to be "Ready" ...
	E0415 17:44:30.926092    1632 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-k6sqd" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-k6sqd" not found
	I0415 17:44:30.926092    1632 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.933772    1632 pod_ready.go:92] pod "etcd-addons-961400" in "kube-system" namespace has status "Ready":"True"
	I0415 17:44:30.933880    1632 pod_ready.go:81] duration metric: took 7.7876ms for pod "etcd-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.933880    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.941134    1632 pod_ready.go:92] pod "kube-apiserver-addons-961400" in "kube-system" namespace has status "Ready":"True"
	I0415 17:44:30.941134    1632 pod_ready.go:81] duration metric: took 7.254ms for pod "kube-apiserver-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.941721    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.950169    1632 pod_ready.go:92] pod "kube-controller-manager-addons-961400" in "kube-system" namespace has status "Ready":"True"
	I0415 17:44:30.950169    1632 pod_ready.go:81] duration metric: took 8.4478ms for pod "kube-controller-manager-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:30.950169    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpk2b" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:31.125230    1632 pod_ready.go:92] pod "kube-proxy-jpk2b" in "kube-system" namespace has status "Ready":"True"
	I0415 17:44:31.125230    1632 pod_ready.go:81] duration metric: took 175.0597ms for pod "kube-proxy-jpk2b" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:31.125230    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:31.188737    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:31.300890    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:31.315723    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:31.365459    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:31.523102    1632 pod_ready.go:92] pod "kube-scheduler-addons-961400" in "kube-system" namespace has status "Ready":"True"
	I0415 17:44:31.523102    1632 pod_ready.go:81] duration metric: took 397.8691ms for pod "kube-scheduler-addons-961400" in "kube-system" namespace to be "Ready" ...
	I0415 17:44:31.523102    1632 pod_ready.go:38] duration metric: took 48.6794115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 17:44:31.523263    1632 api_server.go:52] waiting for apiserver process to appear ...
	I0415 17:44:31.538386    1632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 17:44:31.571208    1632 api_server.go:72] duration metric: took 52.7009084s to wait for apiserver process to appear ...
	I0415 17:44:31.571324    1632 api_server.go:88] waiting for apiserver healthz status ...
	I0415 17:44:31.571414    1632 api_server.go:253] Checking apiserver healthz at https://172.19.57.138:8443/healthz ...
	I0415 17:44:31.578554    1632 api_server.go:279] https://172.19.57.138:8443/healthz returned 200:
	ok
	I0415 17:44:31.581095    1632 api_server.go:141] control plane version: v1.29.3
	I0415 17:44:31.581095    1632 api_server.go:131] duration metric: took 9.7711ms to wait for apiserver health ...
	I0415 17:44:31.581095    1632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 17:44:31.681620    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:31.739223    1632 system_pods.go:59] 18 kube-system pods found
	I0415 17:44:31.739223    1632 system_pods.go:61] "coredns-76f75df574-j9b94" [e540ce24-98f4-4d71-82d3-0c0bfbc544b0] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "csi-hostpath-attacher-0" [8f665389-c958-4dba-bd92-f6f20526255b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 17:44:31.739223    1632 system_pods.go:61] "csi-hostpath-resizer-0" [c7ae49b9-cf49-451a-8a9c-16d6d90ef5b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 17:44:31.739223    1632 system_pods.go:61] "csi-hostpathplugin-bvh7t" [81866781-98b0-417f-97d6-95e14f445c1b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 17:44:31.739223    1632 system_pods.go:61] "etcd-addons-961400" [266c4ad6-6d4d-43a9-a3e8-df23378393f1] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "kube-apiserver-addons-961400" [6c70b350-b70f-4a95-893b-ee9cc3506a22] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "kube-controller-manager-addons-961400" [b2e1c129-74f1-461c-bc58-6f86b8723cd3] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "kube-ingress-dns-minikube" [ab5209bb-5701-404a-a2c4-66f72142db30] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0415 17:44:31.739223    1632 system_pods.go:61] "kube-proxy-jpk2b" [3ea5225d-0983-43d9-9a94-0d6cbd16fba5] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "kube-scheduler-addons-961400" [4408a84b-5dd5-49f7-a9f5-e863852a6cb4] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "metrics-server-75d6c48ddd-cccrf" [9b4bd2c2-1db7-4ed4-bc57-9ec713d519da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 17:44:31.739223    1632 system_pods.go:61] "nvidia-device-plugin-daemonset-pczqg" [aee0ca2a-fbc4-4036-9d10-7bd560b85a6b] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "registry-fhvrv" [c16cb697-f687-4d6e-a843-78d29be15574] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0415 17:44:31.739223    1632 system_pods.go:61] "registry-proxy-8s68r" [a017a726-3edd-4a71-a68d-edcc93eb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 17:44:31.739223    1632 system_pods.go:61] "snapshot-controller-58dbcc7b99-8jncp" [36549c65-08e4-4c97-b375-17fb1488fff7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 17:44:31.739223    1632 system_pods.go:61] "snapshot-controller-58dbcc7b99-r6vd8" [06f948c3-531c-4d44-87ea-64056ba4dc8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 17:44:31.739223    1632 system_pods.go:61] "storage-provisioner" [fe46d5c7-7245-42a2-8452-8f93f822de4b] Running
	I0415 17:44:31.739223    1632 system_pods.go:61] "tiller-deploy-7b677967b9-c97nd" [f5730ee6-1646-4c69-a454-1c22681d47f0] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 17:44:31.739223    1632 system_pods.go:74] duration metric: took 158.1263ms to wait for pod list to return data ...
	I0415 17:44:31.739223    1632 default_sa.go:34] waiting for default service account to be created ...
	I0415 17:44:31.811286    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:31.820011    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:31.860772    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:31.933036    1632 default_sa.go:45] found service account: "default"
	I0415 17:44:31.933102    1632 default_sa.go:55] duration metric: took 193.8772ms for default service account to be created ...
	I0415 17:44:31.933170    1632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 17:44:32.135601    1632 system_pods.go:86] 18 kube-system pods found
	I0415 17:44:32.136122    1632 system_pods.go:89] "coredns-76f75df574-j9b94" [e540ce24-98f4-4d71-82d3-0c0bfbc544b0] Running
	I0415 17:44:32.136122    1632 system_pods.go:89] "csi-hostpath-attacher-0" [8f665389-c958-4dba-bd92-f6f20526255b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 17:44:32.136122    1632 system_pods.go:89] "csi-hostpath-resizer-0" [c7ae49b9-cf49-451a-8a9c-16d6d90ef5b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 17:44:32.136208    1632 system_pods.go:89] "csi-hostpathplugin-bvh7t" [81866781-98b0-417f-97d6-95e14f445c1b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 17:44:32.136208    1632 system_pods.go:89] "etcd-addons-961400" [266c4ad6-6d4d-43a9-a3e8-df23378393f1] Running
	I0415 17:44:32.136208    1632 system_pods.go:89] "kube-apiserver-addons-961400" [6c70b350-b70f-4a95-893b-ee9cc3506a22] Running
	I0415 17:44:32.136208    1632 system_pods.go:89] "kube-controller-manager-addons-961400" [b2e1c129-74f1-461c-bc58-6f86b8723cd3] Running
	I0415 17:44:32.136208    1632 system_pods.go:89] "kube-ingress-dns-minikube" [ab5209bb-5701-404a-a2c4-66f72142db30] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0415 17:44:32.136208    1632 system_pods.go:89] "kube-proxy-jpk2b" [3ea5225d-0983-43d9-9a94-0d6cbd16fba5] Running
	I0415 17:44:32.136208    1632 system_pods.go:89] "kube-scheduler-addons-961400" [4408a84b-5dd5-49f7-a9f5-e863852a6cb4] Running
	I0415 17:44:32.136208    1632 system_pods.go:89] "metrics-server-75d6c48ddd-cccrf" [9b4bd2c2-1db7-4ed4-bc57-9ec713d519da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 17:44:32.136208    1632 system_pods.go:89] "nvidia-device-plugin-daemonset-pczqg" [aee0ca2a-fbc4-4036-9d10-7bd560b85a6b] Running
	I0415 17:44:32.136208    1632 system_pods.go:89] "registry-fhvrv" [c16cb697-f687-4d6e-a843-78d29be15574] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0415 17:44:32.136208    1632 system_pods.go:89] "registry-proxy-8s68r" [a017a726-3edd-4a71-a68d-edcc93eb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 17:44:32.136208    1632 system_pods.go:89] "snapshot-controller-58dbcc7b99-8jncp" [36549c65-08e4-4c97-b375-17fb1488fff7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 17:44:32.136208    1632 system_pods.go:89] "snapshot-controller-58dbcc7b99-r6vd8" [06f948c3-531c-4d44-87ea-64056ba4dc8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 17:44:32.136208    1632 system_pods.go:89] "storage-provisioner" [fe46d5c7-7245-42a2-8452-8f93f822de4b] Running
	I0415 17:44:32.136208    1632 system_pods.go:89] "tiller-deploy-7b677967b9-c97nd" [f5730ee6-1646-4c69-a454-1c22681d47f0] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 17:44:32.136208    1632 system_pods.go:126] duration metric: took 203.0365ms to wait for k8s-apps to be running ...
	I0415 17:44:32.136208    1632 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 17:44:32.149682    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 17:44:32.185284    1632 system_svc.go:56] duration metric: took 49.0752ms WaitForService to wait for kubelet
	I0415 17:44:32.185284    1632 kubeadm.go:576] duration metric: took 53.3149789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 17:44:32.185284    1632 node_conditions.go:102] verifying NodePressure condition ...
	I0415 17:44:32.195994    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:32.297754    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:32.313963    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:32.328444    1632 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 17:44:32.328503    1632 node_conditions.go:123] node cpu capacity is 2
	I0415 17:44:32.328731    1632 node_conditions.go:105] duration metric: took 143.4463ms to run NodePressure ...
	I0415 17:44:32.328731    1632 start.go:240] waiting for startup goroutines ...
	I0415 17:44:32.362543    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:32.692943    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:32.804000    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:32.822630    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:32.875709    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:33.183830    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:33.302381    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:33.311910    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:33.361044    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:33.689826    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:33.801633    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:33.826689    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:33.865431    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:34.180875    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:34.310613    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:34.316295    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:34.360206    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:34.692049    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:34.803662    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:34.819614    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:34.865584    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:35.184358    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:35.313489    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:35.313622    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:35.755605    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:35.756885    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:35.798014    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:35.811678    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:35.871912    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:36.335759    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:36.342384    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:36.342384    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:36.550383    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:36.791231    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:36.797143    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:36.813410    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:36.872741    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:37.193978    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:37.304733    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:37.311088    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:37.357765    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:37.683447    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:37.808408    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:37.813317    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:37.856542    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:38.187200    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:38.299879    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:38.314583    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:38.362926    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:38.691796    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:38.804697    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:38.809922    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:38.852106    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:39.184105    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:39.302666    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:39.314999    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:39.363492    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:39.690666    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:39.803811    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:39.818142    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:39.852641    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:40.183512    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:40.310688    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:40.315481    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:40.357795    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:40.690022    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:41.103652    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:41.106050    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:41.106777    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:41.194711    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:41.301508    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:41.317012    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:41.365251    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:41.693103    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:41.803952    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:41.819052    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:41.867765    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:42.182473    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:42.310595    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:42.316701    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:42.358905    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:42.690729    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:42.801397    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:42.815508    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:42.864645    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:43.189658    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:43.300481    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:43.317647    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:43.365096    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:43.694044    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:43.805878    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:43.816387    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:43.853837    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:44.184745    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:44.298596    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:44.317925    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:44.361790    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:44.693640    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:44.808215    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:44.814219    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:44.864584    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:45.185133    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:45.314050    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:45.314050    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:45.360889    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:45.687591    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:45.799771    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:45.815943    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:45.861056    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:46.190286    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:46.302278    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:46.316976    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:46.365048    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:46.681396    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:46.808707    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:46.814593    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:46.856868    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:47.185292    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:47.297990    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:47.312234    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:47.361809    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:47.688563    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:48.299480    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:48.299861    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:48.299861    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:48.303505    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:48.306653    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:48.312175    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:48.367072    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:48.683252    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:48.847918    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:48.871326    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:48.898336    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:49.192256    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:49.339815    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:49.347194    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:49.358938    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:49.685304    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:49.801365    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:49.814635    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:49.878285    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:50.203091    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:50.306419    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:50.314567    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:50.373738    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:50.683323    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:50.838381    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:50.840080    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:50.864427    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:51.190207    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:51.302781    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:51.320820    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:51.353163    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:51.682495    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:51.810907    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:51.815156    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:51.858072    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:52.188426    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:52.303299    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:52.317526    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:52.367198    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:52.680676    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:52.807661    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:52.813850    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:52.853882    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:53.186633    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:53.298370    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:53.314044    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:53.361786    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:53.693466    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:53.813091    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:53.825291    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:53.854836    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:54.185066    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:54.298463    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:54.315102    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:54.363103    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:54.694380    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:54.808047    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:54.813226    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:54.861833    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:55.193468    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:55.306137    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:55.310134    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:55.353175    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:55.682180    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:55.810082    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:55.815429    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:55.859558    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:56.191620    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:56.303765    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:56.318188    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:56.350755    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:56.686309    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:56.798413    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:56.814406    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:56.862293    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:57.179975    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:57.308497    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:57.310415    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:57.354759    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:57.685434    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:57.798851    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:57.813433    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:57.863186    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:58.178913    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:58.356400    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:58.360763    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:58.364788    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:58.689637    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:58.799019    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:58.816039    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:58.862240    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:59.185266    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:59.308688    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:59.313784    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:59.357671    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:44:59.686509    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:44:59.799495    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:44:59.813854    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:44:59.864841    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:00.193300    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:00.305178    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:00.310125    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:00.352375    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:00.684964    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:00.814343    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:00.814523    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:00.859145    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:01.190211    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:01.301899    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:01.318899    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:01.364308    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:01.680990    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:01.811899    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:01.817429    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:01.860853    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:02.188350    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:02.299923    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:02.315017    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:02.364640    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:02.684158    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:02.812695    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:02.815524    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:02.859444    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:03.192329    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:03.306647    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:03.313355    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:03.354170    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:03.681934    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:03.802299    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:03.812115    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:03.862182    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:04.181563    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:04.307505    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:04.317122    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:04.355244    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:04.950544    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:04.951177    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:04.951591    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:04.954729    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:05.184437    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:05.305351    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:05.311378    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:05.354648    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:05.681005    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:05.809200    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:05.813837    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:05.856254    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:06.183866    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:06.313416    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:06.317025    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:06.360282    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:06.687400    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:06.797709    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:06.813382    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:06.861791    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:07.191636    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:07.301904    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:07.319121    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:07.369459    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:07.693868    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:07.807025    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:07.813528    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:07.866965    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:08.185147    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:08.316738    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:08.317891    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:08.361554    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:08.691355    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:08.801655    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:08.821898    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:08.871329    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:10.143918    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:11.068079    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:11.074466    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:11.075805    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:11.079452    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:11.084167    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:11.084167    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:11.088765    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:11.090174    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:11.195975    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:11.306925    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:11.312931    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:11.353412    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:11.687767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:11.811650    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:11.815071    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:11.871138    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:12.191135    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:12.302217    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:12.319477    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:12.375073    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:12.694411    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:12.808335    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:12.814935    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:12.856165    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:13.186603    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:13.301081    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:13.314659    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:13.367040    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:13.905731    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:13.909514    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:13.918216    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:13.919971    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:14.184929    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:14.307934    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:14.312363    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:14.365681    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:14.692749    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:14.806167    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:14.812666    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:14.854819    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:15.181814    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:15.351005    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:15.357836    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:15.358022    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:15.715212    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:15.804084    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:15.817642    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:15.871653    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:16.196079    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:16.316684    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:16.320942    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:16.356874    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:16.691551    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:16.802526    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:16.817160    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:16.868309    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:17.182177    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:17.314161    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:17.343779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:17.360760    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:17.693766    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:17.807782    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:17.811781    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:17.854766    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:18.186055    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:18.296636    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:18.312635    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:18.361633    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:18.694267    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:18.805886    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:18.810878    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:18.853875    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:19.188682    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:19.302259    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:19.315848    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:19.371853    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:19.682124    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:19.813357    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:19.813357    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:19.859974    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:20.188744    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:20.299893    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:20.317057    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:20.364238    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:20.692144    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:20.808275    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:20.812919    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:20.866747    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:21.182926    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:21.311923    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:21.314693    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:21.360021    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:21.691426    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:21.805729    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:21.810912    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:21.855615    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:22.185824    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:22.298227    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:22.313390    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:22.362550    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:22.695007    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:22.806666    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:22.811343    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:22.855850    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:23.188471    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:23.300776    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:23.313475    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:23.364783    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:23.681244    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:23.911797    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:23.911797    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:23.912785    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:24.408245    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:24.408427    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:24.408942    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:24.409607    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:24.740517    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:24.813351    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:24.815310    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:24.860516    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:25.305838    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:25.311990    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:25.317391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:25.359773    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:26.039490    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:26.045226    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:26.046801    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:26.051040    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:26.189514    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:26.305046    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:26.312171    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:26.365989    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:26.693050    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:26.813311    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:26.818132    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:26.862832    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:27.238225    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:27.322257    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:27.326305    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:27.367157    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:27.693602    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:27.806856    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:27.817087    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:27.867866    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:28.196184    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:28.308491    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:28.312072    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:28.356384    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:28.690711    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:28.807689    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:29.192024    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:29.391477    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:29.400478    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:29.400478    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:29.400478    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:29.411201    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:29.691755    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:29.819036    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:29.824607    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:29.860846    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:30.191488    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:30.304489    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:30.318065    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:30.351656    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:30.681801    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:30.814946    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:30.815012    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:30.872857    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:31.193817    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:31.305476    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:31.317494    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:31.351056    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:31.683950    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:31.816552    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:31.817242    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:31.861022    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:32.190318    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:32.304542    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:32.317173    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:32.367001    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:32.685403    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:32.810107    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:32.814623    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:32.856752    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:33.186150    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:33.300050    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:33.314171    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:33.364754    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:33.710005    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:33.818627    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:33.818627    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:33.861249    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:34.193522    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:34.305572    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:34.311280    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:34.353391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:34.687803    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:34.799948    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:34.814658    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:34.863102    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:35.180176    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:35.309403    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:35.314345    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:35.357719    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:35.688613    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:35.803220    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:35.815939    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:35.865282    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:36.195811    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:36.308428    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:36.313828    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:36.357330    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:36.689634    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:36.802254    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:36.817273    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:36.867044    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:37.183444    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:37.314291    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:37.315618    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:37.360789    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:37.693709    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:37.804513    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:37.820391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:37.854003    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:38.184563    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:38.311687    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:38.321712    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:38.360676    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:38.691052    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:38.804720    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:38.818079    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:38.867927    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:39.915584    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:39.915584    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:39.915584    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:39.917117    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:40.461846    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:40.462411    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:40.462796    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:40.463857    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:40.469209    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:40.473435    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:40.475420    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:40.478549    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:40.690459    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:40.809770    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:40.815532    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 17:45:40.858457    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:41.195120    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:41.306202    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:41.313346    1632 kapi.go:107] duration metric: took 1m26.508459s to wait for kubernetes.io/minikube-addons=registry ...
	I0415 17:45:41.369946    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:41.687158    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:41.799653    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:41.865074    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:42.195853    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:42.307917    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:42.357967    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:42.684856    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:42.812430    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:42.866171    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:43.250875    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:43.300626    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:43.362877    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:43.691623    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:43.803034    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:43.869071    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:44.241239    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:44.364486    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:44.439963    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:44.710101    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:44.802935    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:44.869856    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:45.182447    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:45.311517    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:45.359980    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:45.688855    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:45.802710    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:45.876231    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:46.181878    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:46.311925    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:46.359585    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:46.688009    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:46.800658    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:46.866586    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:47.194079    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:47.308345    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:47.359005    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:47.686987    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:48.388503    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:48.392352    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:48.395079    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:48.402872    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:48.405180    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:48.688615    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:48.801843    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:48.877172    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:49.193132    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:49.308895    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:49.356324    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:49.683699    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:49.811544    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:49.862309    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:50.190438    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:50.301040    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:50.366230    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:50.691942    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:50.804553    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:50.853301    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:51.181780    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:51.314756    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:51.361087    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:51.689846    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:51.801827    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:51.865600    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:52.374832    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:52.374999    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:52.379168    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:52.681779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:52.809309    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:52.858456    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:53.195849    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:53.311348    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:53.352516    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:53.687484    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:53.806923    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:53.872256    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:54.181203    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:54.308243    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:54.359543    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:54.686602    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:54.800532    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:54.867983    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:55.195224    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:55.309363    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:55.358150    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:55.686611    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:55.813451    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:55.864318    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:56.193737    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:56.309443    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:56.355092    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:56.688884    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:56.802020    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:56.864361    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:57.193195    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:57.305878    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:57.353438    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:57.684918    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:57.812064    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:57.860105    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:58.191685    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:58.303357    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:58.354289    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:58.689094    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:58.798514    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:58.861804    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:59.194382    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:59.307390    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:59.354373    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:45:59.687430    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:45:59.798941    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:45:59.863389    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:00.182261    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:00.311620    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:00.359189    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:00.690231    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:00.805227    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:00.853050    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:01.186559    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:01.314552    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:01.362305    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:01.689119    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:03.059769    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:03.060493    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:03.060634    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:03.261665    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:03.261712    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:03.263527    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:03.267130    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:03.298429    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:03.411136    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:03.697223    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:03.836895    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:03.880302    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:04.185886    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:04.315711    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:04.364682    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:04.693490    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:04.809692    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:04.862798    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:05.181805    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:05.307177    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:05.357051    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:05.689275    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:05.799129    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:05.862083    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:06.193874    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:06.864097    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:06.865701    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:06.865701    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:06.872123    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:06.872944    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:07.195599    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:07.311176    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:07.364350    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:07.694158    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:07.806614    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:07.853833    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:08.180866    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:08.314810    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:08.360823    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:08.689779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:08.802590    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:08.867325    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:09.307248    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:09.317608    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:09.360594    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:09.692455    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:09.807070    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:09.881066    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:10.181506    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:10.309473    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:10.367287    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:10.685521    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:10.812547    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:10.864624    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:11.186128    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:11.312966    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:11.365377    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:11.692017    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:11.806689    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:11.875806    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:12.192082    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:12.304343    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:12.369580    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:12.696233    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:12.811297    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:12.872327    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:13.184124    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:13.312046    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:13.359290    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:13.695398    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:13.818058    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:13.868738    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:14.181744    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:14.311013    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:14.360573    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:14.692158    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:15.161334    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:15.168044    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:15.394787    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:15.396092    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:15.400644    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:15.687914    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:15.814765    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:15.869169    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:16.196870    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:16.316253    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:16.360554    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:16.682057    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:16.811835    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:16.869506    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:17.189466    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:17.308241    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:17.364142    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:17.694843    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:17.807154    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:17.857569    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:18.189505    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:18.305656    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:18.364658    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:18.682460    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:18.810665    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:18.859244    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:19.190136    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:19.304461    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:19.366348    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:19.704178    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:19.806665    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:19.853824    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:20.186234    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:20.311723    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:20.358345    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:20.687912    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:20.815858    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:20.860173    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:21.193024    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:21.305079    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:21.355049    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:21.691449    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:21.803970    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:21.864979    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:22.182107    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:22.311617    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:22.358730    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:22.689941    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:22.798559    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:22.866602    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:23.190524    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:23.305455    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:23.355649    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:23.686164    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:23.814556    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:23.862923    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:24.188958    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:24.301023    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:24.363735    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:24.694534    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:24.810401    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:24.857346    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:25.190615    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:25.312797    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:25.360759    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:25.690851    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:25.803442    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:25.870437    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:26.195220    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:26.306149    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:26.382774    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:26.682670    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:26.811481    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:26.859200    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:27.196922    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:27.308981    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:27.355108    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:27.687340    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:27.813657    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:27.860736    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:28.194441    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:28.306610    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:28.355786    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:28.688603    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:28.810869    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:28.866378    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:29.190869    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:29.299826    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:29.364860    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:29.691016    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:29.804261    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:29.867658    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:30.193467    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:30.309098    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:30.355026    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:30.683615    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:30.812127    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:30.860244    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:31.192642    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:31.304010    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:31.354078    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:31.684607    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:31.812476    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:31.861271    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:32.388307    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:32.388843    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:32.390528    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 17:46:32.707127    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:32.811813    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:32.867926    1632 kapi.go:107] duration metric: took 2m15.5262547s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0415 17:46:33.182785    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:33.311967    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:33.687108    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:33.798455    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:34.194524    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:34.308028    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:34.688870    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:34.802410    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:35.182076    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:35.309668    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:35.754542    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:35.810807    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:36.242988    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:36.310968    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:36.693831    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:36.804255    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:37.183106    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:37.311978    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:37.693942    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:37.805763    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:38.191002    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:38.298364    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:38.693724    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:38.807463    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:39.187266    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:39.300616    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:39.681732    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:39.812485    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:40.193181    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:40.307014    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:40.687881    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:40.799353    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:41.192495    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:41.305977    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:41.691128    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:41.802254    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:42.195280    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:42.307772    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:42.689387    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:42.801142    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:43.197751    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:43.309907    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:43.692237    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:44.013690    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:44.192882    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:44.303850    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:44.681302    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:44.818506    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:45.190222    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:45.301418    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:45.682375    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:45.809690    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:46.298831    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:46.308373    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:46.694799    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:46.799649    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:47.191787    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:47.304768    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:48.002777    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:48.006798    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:48.180601    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:48.307066    1632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 17:46:48.701275    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:48.808176    1632 kapi.go:107] duration metric: took 2m34.0241718s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0415 17:46:49.187155    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:49.691144    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:50.208458    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:50.689671    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:51.196217    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:51.687460    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:52.502702    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:52.681450    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 17:46:53.189187    1632 kapi.go:107] duration metric: took 2m33.5141599s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0415 17:46:53.191772    1632 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-961400 cluster.
	I0415 17:46:53.194259    1632 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0415 17:46:53.198289    1632 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0415 17:46:53.206293    1632 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, helm-tiller, metrics-server, inspektor-gadget, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0415 17:46:53.209174    1632 addons.go:505] duration metric: took 3m14.3377539s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner helm-tiller metrics-server inspektor-gadget ingress-dns yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0415 17:46:53.209465    1632 start.go:245] waiting for cluster config update ...
	I0415 17:46:53.209465    1632 start.go:254] writing updated cluster config ...
	I0415 17:46:53.224732    1632 ssh_runner.go:195] Run: rm -f paused
	I0415 17:46:53.459966    1632 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 17:46:53.463184    1632 out.go:177] * Done! kubectl is now configured to use "addons-961400" cluster and "default" namespace by default
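	The repeated `kapi.go:96` / `kapi.go:107` lines above come from a poll-until-ready loop: check the pod's state, log it, sleep, repeat, and report the total duration once the pod matches. A minimal stdlib-only sketch of that pattern is below (hypothetical illustration; minikube's actual kapi.go polls a real API server via client-go, and `waitForPod` and its parameters are names invented here):

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForPod polls check() every interval until it reports ready or
	// timeout elapses. It returns how long the wait took, mirroring the
	// "duration metric: took ..." lines in the log above.
	func waitForPod(label string, check func() bool, interval, timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		for {
			if check() {
				return time.Since(start), nil
			}
			if time.Since(start) > timeout {
				return time.Since(start), errors.New("timed out waiting for " + label)
			}
			// Matches the shape of the kapi.go:96 log lines.
			fmt.Printf("waiting for pod %q, current state: Pending\n", label)
			time.Sleep(interval)
		}
	}

	func main() {
		// Simulate a pod that becomes ready on the third poll.
		polls := 0
		took, err := waitForPod("app.kubernetes.io/name=ingress-nginx",
			func() bool { polls++; return polls >= 3 },
			10*time.Millisecond, time.Second)
		if err != nil {
			fmt.Println("wait failed:", err)
			return
		}
		fmt.Printf("duration metric: took %v to wait for pod\n", took)
	}
	```

	The per-addon waits in the log (csi-hostpath-driver, ingress-nginx, gcp-auth) each finish with one `kapi.go:107` duration line, consistent with one such loop running per label selector.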
	
	
	==> Docker <==
	Apr 15 17:47:38 addons-961400 dockerd[1329]: time="2024-04-15T17:47:38.885245420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 17:47:38 addons-961400 dockerd[1329]: time="2024-04-15T17:47:38.885399821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 17:47:38 addons-961400 dockerd[1329]: time="2024-04-15T17:47:38.886403425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 17:47:39 addons-961400 dockerd[1322]: time="2024-04-15T17:47:39.038633145Z" level=info msg="ignoring event" container=c2041382c3da591d66164287acafce612e6c05035574547bc35dc19fbb68033b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 17:47:39 addons-961400 dockerd[1329]: time="2024-04-15T17:47:39.040246953Z" level=info msg="shim disconnected" id=c2041382c3da591d66164287acafce612e6c05035574547bc35dc19fbb68033b namespace=moby
	Apr 15 17:47:39 addons-961400 dockerd[1329]: time="2024-04-15T17:47:39.040371653Z" level=warning msg="cleaning up after shim disconnected" id=c2041382c3da591d66164287acafce612e6c05035574547bc35dc19fbb68033b namespace=moby
	Apr 15 17:47:39 addons-961400 dockerd[1329]: time="2024-04-15T17:47:39.040428453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 15 17:47:41 addons-961400 dockerd[1322]: time="2024-04-15T17:47:41.236676732Z" level=info msg="ignoring event" container=05f1785a498a24a047fe9a3481a2e1909151f365f9c7bb541d3a223f3f719983 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 17:47:41 addons-961400 dockerd[1329]: time="2024-04-15T17:47:41.236930733Z" level=info msg="shim disconnected" id=05f1785a498a24a047fe9a3481a2e1909151f365f9c7bb541d3a223f3f719983 namespace=moby
	Apr 15 17:47:41 addons-961400 dockerd[1329]: time="2024-04-15T17:47:41.237097034Z" level=warning msg="cleaning up after shim disconnected" id=05f1785a498a24a047fe9a3481a2e1909151f365f9c7bb541d3a223f3f719983 namespace=moby
	Apr 15 17:47:41 addons-961400 dockerd[1329]: time="2024-04-15T17:47:41.237166334Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1322]: time="2024-04-15T17:47:49.508800865Z" level=info msg="ignoring event" container=e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 17:47:49 addons-961400 dockerd[1329]: time="2024-04-15T17:47:49.510761777Z" level=info msg="shim disconnected" id=e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1329]: time="2024-04-15T17:47:49.510842878Z" level=warning msg="cleaning up after shim disconnected" id=e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1329]: time="2024-04-15T17:47:49.510860778Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1329]: time="2024-04-15T17:47:49.576948403Z" level=warning msg="cleanup warnings time=\"2024-04-15T17:47:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1329]: time="2024-04-15T17:47:49.893279838Z" level=info msg="shim disconnected" id=e1bceeddcbc57648ffe85d434e12ed5c27233ed8db51300e3ba45656a641b1fb namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1329]: time="2024-04-15T17:47:49.893448139Z" level=warning msg="cleaning up after shim disconnected" id=e1bceeddcbc57648ffe85d434e12ed5c27233ed8db51300e3ba45656a641b1fb namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1329]: time="2024-04-15T17:47:49.893469439Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 15 17:47:49 addons-961400 dockerd[1322]: time="2024-04-15T17:47:49.896667859Z" level=info msg="ignoring event" container=e1bceeddcbc57648ffe85d434e12ed5c27233ed8db51300e3ba45656a641b1fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 17:47:51 addons-961400 dockerd[1329]: time="2024-04-15T17:47:51.248633530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 17:47:51 addons-961400 dockerd[1329]: time="2024-04-15T17:47:51.250607037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 17:47:51 addons-961400 dockerd[1329]: time="2024-04-15T17:47:51.250658037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 17:47:51 addons-961400 dockerd[1329]: time="2024-04-15T17:47:51.250859337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 17:47:51 addons-961400 cri-dockerd[1228]: time="2024-04-15T17:47:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b55bfb601b3f8b93534765eb2f69caa5864ed1f9651d66de0f2bf5daedb64cc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c2041382c3da5       busybox@sha256:c3839dd800b9eb7603340509769c43e146a74c63dca3045a8e7dc8ee07e53966                                                              13 seconds ago       Exited              busybox                                  0                   05f1785a498a2       test-local-path
	633721c8cef19       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              20 seconds ago       Exited              helper-pod                               0                   6dd9632ea1891       helper-pod-create-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8
	4f394692668e4       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                          21 seconds ago       Exited              helm-test                                0                   a33abf5d90186       helm-test
	4395df84a0eee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 59 seconds ago       Running             gcp-auth                                 0                   e9ce5bab38379       gcp-auth-7d69788767-d46cz
	770f8990fe3aa       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             About a minute ago   Running             controller                               0                   a472077ab06e3       ingress-nginx-controller-65496f9567-vmzf6
	7b9cf5d248d00       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   7e7fb01cbb44d       csi-hostpathplugin-bvh7t
	2f855116e154d       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   7e7fb01cbb44d       csi-hostpathplugin-bvh7t
	8e6686dd0dcba       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   7e7fb01cbb44d       csi-hostpathplugin-bvh7t
	eecdd3899f161       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   7e7fb01cbb44d       csi-hostpathplugin-bvh7t
	73cd8eaa02f69       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   7e7fb01cbb44d       csi-hostpathplugin-bvh7t
	3710ea374ff96       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   7e7fb01cbb44d       csi-hostpathplugin-bvh7t
	7a5cfec551ffe       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   fb9614547372f       csi-hostpath-resizer-0
	7fbbc8b5433c5       b29d748098e32                                                                                                                                About a minute ago   Exited              patch                                    2                   5c92f504d6e1e       ingress-nginx-admission-patch-jhsbq
	0e0c59295f714       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   4deb333f67732       csi-hostpath-attacher-0
	3e9433c57a36a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              create                                   0                   f999560c6f034       ingress-nginx-admission-create-52hh5
	c4e5a7b5d1759       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        About a minute ago   Running             yakd                                     0                   d7dfb74777e21       yakd-dashboard-9947fc6bf-p6bck
	e05be8aeef369       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   fcf001d26705d       snapshot-controller-58dbcc7b99-8jncp
	dc1aa5b0b2f6a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   57e1d0b67283b       snapshot-controller-58dbcc7b99-r6vd8
	85b565ee4623c       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   c236edb16cc89       local-path-provisioner-78b46b4d5c-2z9sw
	fed03bace6104       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   04f6f019a483e       kube-ingress-dns-minikube
	87583bd9d792a       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50                               3 minutes ago        Running             cloud-spanner-emulator                   0                   03c2809c709ac       cloud-spanner-emulator-5446596998-fptfb
	fa5742f836628       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   476349d6d5326       nvidia-device-plugin-daemonset-pczqg
	4b26692b7ed63       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   1fd8f64ed2d81       storage-provisioner
	53e7d066a4a06       a1d263b5dc5b0                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   faa77c1159598       kube-proxy-jpk2b
	e42390ca503c6       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   cc1df8b56ff9e       coredns-76f75df574-j9b94
	d9b84ff48f72d       39f995c9f1996                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   2408bb850e41d       kube-apiserver-addons-961400
	98f1480d5487e       8c390d98f50c0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   807840d6dfdfd       kube-scheduler-addons-961400
	eaab30ecec225       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   2f2a0f80f56ce       etcd-addons-961400
	d92a352336691       6052a25da3f97                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   29d807430ac25       kube-controller-manager-addons-961400
	
	
	==> controller_ingress [770f8990fe3a] <==
	I0415 17:46:48.397875       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0415 17:46:48.412184       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="29" git="v1.29.3" state="clean" commit="6813625b7cd706db5bc7388921be03071e1a492d" platform="linux/amd64"
	I0415 17:46:48.914613       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0415 17:46:48.958141       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0415 17:46:48.972267       7 nginx.go:265] "Starting NGINX Ingress controller"
	I0415 17:46:48.997804       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0c2767dd-5a61-4d02-9055-510180f53339", APIVersion:"v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0415 17:46:49.011763       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"368af082-2bc7-4f41-8ada-d7630ab46c46", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0415 17:46:49.011835       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"174ba89d-368d-4067-955a-d017133782f3", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0415 17:46:50.174471       7 nginx.go:308] "Starting NGINX process"
	I0415 17:46:50.174850       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0415 17:46:50.174989       7 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0415 17:46:50.186930       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0415 17:46:50.241728       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0415 17:46:50.243192       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-65496f9567-vmzf6"
	I0415 17:46:50.251590       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-65496f9567-vmzf6" node="addons-961400"
	I0415 17:46:50.279737       7 controller.go:210] "Backend successfully reloaded"
	I0415 17:46:50.280219       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0415 17:46:50.280465       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-65496f9567-vmzf6", UID:"69cd140d-3101-4950-9534-ebf01eb95eaf", APIVersion:"v1", ResourceVersion:"1284", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0415 17:47:50.178340       7 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0415 17:47:50.213831       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.035s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:18.1kBs testedConfigurationSize:0.036}
	I0415 17:47:50.214109       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0415 17:47:50.228679       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0415 17:47:50.230344       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"e59a8fab-22d4-4a27-935a-28e1d5eb294d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1615", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0415 17:47:50.267775       7 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"172.19.57.138"}]
	I0415 17:47:50.284317       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"e59a8fab-22d4-4a27-935a-28e1d5eb294d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	
	
	==> coredns [e42390ca503c] <==
	[INFO] 10.244.0.9:51950 - 33694 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0002234s
	[INFO] 10.244.0.9:45940 - 27443 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000547801s
	[INFO] 10.244.0.9:45940 - 41534 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002359s
	[INFO] 10.244.0.9:49197 - 62876 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001935s
	[INFO] 10.244.0.9:49197 - 25506 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000629302s
	[INFO] 10.244.0.9:48956 - 22256 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000176101s
	[INFO] 10.244.0.9:48956 - 16626 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000351701s
	[INFO] 10.244.0.9:41299 - 4233 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001553s
	[INFO] 10.244.0.9:41299 - 62581 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000495701s
	[INFO] 10.244.0.9:53855 - 50274 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000534s
	[INFO] 10.244.0.9:53855 - 59233 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000707502s
	[INFO] 10.244.0.9:37741 - 32882 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000742s
	[INFO] 10.244.0.9:37741 - 19060 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001076802s
	[INFO] 10.244.0.9:38018 - 64952 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000306401s
	[INFO] 10.244.0.9:38018 - 40615 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000628901s
	[INFO] 10.244.0.22:43881 - 7626 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000378201s
	[INFO] 10.244.0.22:39133 - 312 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000161701s
	[INFO] 10.244.0.22:45462 - 26199 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000228101s
	[INFO] 10.244.0.22:54484 - 51053 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119301s
	[INFO] 10.244.0.22:35200 - 49486 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285601s
	[INFO] 10.244.0.22:54801 - 14432 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285002s
	[INFO] 10.244.0.22:51568 - 31529 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.001608906s
	[INFO] 10.244.0.22:51938 - 35156 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002376209s
	[INFO] 10.244.0.23:46350 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000463703s
	[INFO] 10.244.0.23:43520 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000195301s
	
	
	==> describe nodes <==
	Name:               addons-961400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-961400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=addons-961400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T17_43_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-961400
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-961400"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 17:43:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-961400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 17:47:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 17:47:32 +0000   Mon, 15 Apr 2024 17:43:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 17:47:32 +0000   Mon, 15 Apr 2024 17:43:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 17:47:32 +0000   Mon, 15 Apr 2024 17:43:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 17:47:32 +0000   Mon, 15 Apr 2024 17:43:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.57.138
	  Hostname:    addons-961400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	System Info:
	  Machine ID:                 77fe3b622dfb4e25869f01e85d29eb03
	  System UUID:                ccfe8e00-ab57-3843-bf41-9f5db74ec089
	  Boot ID:                    1312a261-9f6a-47ff-95e5-554dfc915044
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-fptfb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gcp-auth                    gcp-auth-7d69788767-d46cz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  ingress-nginx               ingress-nginx-controller-65496f9567-vmzf6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m38s
	  kube-system                 coredns-76f75df574-j9b94                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m14s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 csi-hostpathplugin-bvh7t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-addons-961400                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m27s
	  kube-system                 kube-apiserver-addons-961400                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-addons-961400        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-jpk2b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-addons-961400                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 nvidia-device-plugin-daemonset-pczqg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 snapshot-controller-58dbcc7b99-8jncp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 snapshot-controller-58dbcc7b99-r6vd8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  local-path-storage          local-path-provisioner-78b46b4d5c-2z9sw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-p6bck               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node addons-961400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node addons-961400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node addons-961400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s                  kubelet          Node addons-961400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s                  kubelet          Node addons-961400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s                  kubelet          Node addons-961400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m24s                  kubelet          Node addons-961400 status is now: NodeReady
	  Normal  RegisteredNode           4m14s                  node-controller  Node addons-961400 event: Registered Node addons-961400 in Controller
	
	
	==> dmesg <==
	[  +5.474521] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.977697] kauditd_printk_skb: 23 callbacks suppressed
	[Apr15 17:44] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.077159] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.050208] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.021466] kauditd_printk_skb: 135 callbacks suppressed
	[ +10.665135] kauditd_printk_skb: 56 callbacks suppressed
	[ +14.253050] kauditd_printk_skb: 2 callbacks suppressed
	[Apr15 17:45] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.257785] kauditd_printk_skb: 24 callbacks suppressed
	[  +3.575904] hrtimer: interrupt took 1345102 ns
	[  +9.610193] kauditd_printk_skb: 6 callbacks suppressed
	[Apr15 17:46] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.714967] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.766690] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.810770] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.597629] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.627384] kauditd_printk_skb: 2 callbacks suppressed
	[Apr15 17:47] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.584449] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.698150] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.343238] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.130098] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.672643] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.723246] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [eaab30ecec22] <==
	{"level":"info","ts":"2024-04-15T17:46:52.516367Z","caller":"traceutil/trace.go:171","msg":"trace[1842459336] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1312; }","duration":"407.667781ms","start":"2024-04-15T17:46:52.108688Z","end":"2024-04-15T17:46:52.516356Z","steps":["trace[1842459336] 'range keys from in-memory index tree'  (duration: 405.579873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:46:52.516541Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T17:46:52.108674Z","time spent":"407.855882ms","remote":"127.0.0.1:53536","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-04-15T17:47:18.601522Z","caller":"traceutil/trace.go:171","msg":"trace[855021749] transaction","detail":"{read_only:false; response_revision:1434; number_of_response:1; }","duration":"160.956676ms","start":"2024-04-15T17:47:18.440485Z","end":"2024-04-15T17:47:18.601442Z","steps":["trace[855021749] 'process raft request'  (duration: 160.623274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:47:23.364276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.749589ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7259959704312139629 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8\" mod_revision:1480 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8\" value_size:4089 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-15T17:47:23.364351Z","caller":"traceutil/trace.go:171","msg":"trace[1344791038] linearizableReadLoop","detail":"{readStateIndex:1553; appliedIndex:1552; }","duration":"148.066701ms","start":"2024-04-15T17:47:23.216272Z","end":"2024-04-15T17:47:23.364339Z","steps":["trace[1344791038] 'read index received'  (duration: 3.040112ms)","trace[1344791038] 'applied index is now lower than readState.Index'  (duration: 145.025889ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T17:47:23.36477Z","caller":"traceutil/trace.go:171","msg":"trace[2029479175] transaction","detail":"{read_only:false; response_revision:1482; number_of_response:1; }","duration":"162.900862ms","start":"2024-04-15T17:47:23.201803Z","end":"2024-04-15T17:47:23.364704Z","steps":["trace[2029479175] 'process raft request'  (duration: 17.649471ms)","trace[2029479175] 'compare'  (duration: 144.175786ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T17:47:23.365806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.59807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8\" ","response":"range_response_count:1 size:4204"}
	{"level":"info","ts":"2024-04-15T17:47:23.365838Z","caller":"traceutil/trace.go:171","msg":"trace[1695691555] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8; range_end:; response_count:1; response_revision:1482; }","duration":"115.70837ms","start":"2024-04-15T17:47:23.250122Z","end":"2024-04-15T17:47:23.36583Z","steps":["trace[1695691555] 'agreement among raft nodes before linearized reading'  (duration: 115.65477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:47:23.364987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.696504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2024-04-15T17:47:23.367063Z","caller":"traceutil/trace.go:171","msg":"trace[1746532961] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1482; }","duration":"150.798713ms","start":"2024-04-15T17:47:23.216239Z","end":"2024-04-15T17:47:23.367038Z","steps":["trace[1746532961] 'agreement among raft nodes before linearized reading'  (duration: 148.660305ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:47:23.958708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"423.555322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T17:47:23.958771Z","caller":"traceutil/trace.go:171","msg":"trace[1377086101] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1482; }","duration":"423.676722ms","start":"2024-04-15T17:47:23.53508Z","end":"2024-04-15T17:47:23.958757Z","steps":["trace[1377086101] 'range keys from in-memory index tree'  (duration: 423.402121ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:47:23.9588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T17:47:23.535053Z","time spent":"423.739822ms","remote":"127.0.0.1:53384","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-15T17:47:23.959087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"388.655679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/test-pvc.17c68545c9795253\" ","response":"range_response_count:1 size:901"}
	{"level":"info","ts":"2024-04-15T17:47:23.959113Z","caller":"traceutil/trace.go:171","msg":"trace[1219773125] range","detail":"{range_begin:/registry/events/default/test-pvc.17c68545c9795253; range_end:; response_count:1; response_revision:1482; }","duration":"388.709379ms","start":"2024-04-15T17:47:23.570397Z","end":"2024-04-15T17:47:23.959106Z","steps":["trace[1219773125] 'range keys from in-memory index tree'  (duration: 388.461579ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:47:23.959132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T17:47:23.570381Z","time spent":"388.745679ms","remote":"127.0.0.1:53440","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":924,"request content":"key:\"/registry/events/default/test-pvc.17c68545c9795253\" "}
	{"level":"warn","ts":"2024-04-15T17:47:23.959518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.30792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8977"}
	{"level":"info","ts":"2024-04-15T17:47:23.959545Z","caller":"traceutil/trace.go:171","msg":"trace[857224955] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1482; }","duration":"300.364421ms","start":"2024-04-15T17:47:23.659173Z","end":"2024-04-15T17:47:23.959538Z","steps":["trace[857224955] 'range keys from in-memory index tree'  (duration: 300.07662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:47:23.959566Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T17:47:23.659146Z","time spent":"300.412221ms","remote":"127.0.0.1:53554","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":9000,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-04-15T17:47:23.959723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.546316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.19.57.138\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-04-15T17:47:23.959747Z","caller":"traceutil/trace.go:171","msg":"trace[222299087] range","detail":"{range_begin:/registry/masterleases/172.19.57.138; range_end:; response_count:1; response_revision:1482; }","duration":"151.597216ms","start":"2024-04-15T17:47:23.808143Z","end":"2024-04-15T17:47:23.95974Z","steps":["trace[222299087] 'range keys from in-memory index tree'  (duration: 151.441616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:47:26.759982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.096253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8977"}
	{"level":"info","ts":"2024-04-15T17:47:26.760175Z","caller":"traceutil/trace.go:171","msg":"trace[461747344] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1490; }","duration":"101.317854ms","start":"2024-04-15T17:47:26.658843Z","end":"2024-04-15T17:47:26.76016Z","steps":["trace[461747344] 'range keys from in-memory index tree'  (duration: 100.954452ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:47:29.178454Z","caller":"traceutil/trace.go:171","msg":"trace[824298436] transaction","detail":"{read_only:false; response_revision:1507; number_of_response:1; }","duration":"222.080995ms","start":"2024-04-15T17:47:28.956343Z","end":"2024-04-15T17:47:29.178424Z","steps":["trace[824298436] 'process raft request'  (duration: 221.862694ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:47:29.239037Z","caller":"traceutil/trace.go:171","msg":"trace[38434409] transaction","detail":"{read_only:false; response_revision:1508; number_of_response:1; }","duration":"247.647409ms","start":"2024-04-15T17:47:28.991322Z","end":"2024-04-15T17:47:29.23897Z","steps":["trace[38434409] 'process raft request'  (duration: 195.218974ms)","trace[38434409] 'compare'  (duration: 51.924933ms)"],"step_count":2}
	
	
	==> gcp-auth [4395df84a0ee] <==
	2024/04/15 17:46:52 GCP Auth Webhook started!
	2024/04/15 17:47:03 Ready to marshal response ...
	2024/04/15 17:47:03 Ready to write response ...
	2024/04/15 17:47:10 Ready to marshal response ...
	2024/04/15 17:47:10 Ready to write response ...
	2024/04/15 17:47:22 Ready to marshal response ...
	2024/04/15 17:47:22 Ready to write response ...
	2024/04/15 17:47:22 Ready to marshal response ...
	2024/04/15 17:47:22 Ready to write response ...
	2024/04/15 17:47:23 Ready to marshal response ...
	2024/04/15 17:47:23 Ready to write response ...
	2024/04/15 17:47:50 Ready to marshal response ...
	2024/04/15 17:47:50 Ready to write response ...
	
	
	==> kernel <==
	 17:47:52 up 6 min,  0 users,  load average: 4.31, 3.01, 1.37
	Linux addons-961400 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9b84ff48f72] <==
	I0415 17:45:48.409930       1 trace.go:236] Trace[515879543]: "List" accept:application/json, */*,audit-id:c952c174-848a-4c75-8e06-975bc6080450,client:172.19.48.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (15-Apr-2024 17:45:47.875) (total time: 534ms):
	Trace[515879543]: ["List(recursive=true) etcd3" audit-id:c952c174-848a-4c75-8e06-975bc6080450,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 534ms (17:45:47.875)]
	Trace[515879543]: [534.483099ms] [534.483099ms] END
	I0415 17:46:03.072738       1 trace.go:236] Trace[1758976408]: "List" accept:application/json, */*,audit-id:33d0311b-73c2-4f38-81cf-6ed9c15e725c,client:172.19.48.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (15-Apr-2024 17:46:02.195) (total time: 876ms):
	Trace[1758976408]: ["List(recursive=true) etcd3" audit-id:33d0311b-73c2-4f38-81cf-6ed9c15e725c,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 876ms (17:46:02.195)]
	Trace[1758976408]: [876.830153ms] [876.830153ms] END
	I0415 17:46:03.073926       1 trace.go:236] Trace[307314364]: "List" accept:application/json, */*,audit-id:9f74afe0-ff20-4056-bd47-2d1327ac6ac1,client:172.19.48.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (15-Apr-2024 17:46:01.811) (total time: 1262ms):
	Trace[307314364]: ["List(recursive=true) etcd3" audit-id:9f74afe0-ff20-4056-bd47-2d1327ac6ac1,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 1261ms (17:46:01.812)]
	Trace[307314364]: [1.262024967s] [1.262024967s] END
	I0415 17:46:03.074169       1 trace.go:236] Trace[1974194850]: "List" accept:application/json, */*,audit-id:dfa1d6fd-e114-4095-b641-d48afc84ef08,client:172.19.48.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (15-Apr-2024 17:46:01.875) (total time: 1198ms):
	Trace[1974194850]: ["List(recursive=true) etcd3" audit-id:dfa1d6fd-e114-4095-b641-d48afc84ef08,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 1198ms (17:46:01.875)]
	Trace[1974194850]: [1.198774533s] [1.198774533s] END
	I0415 17:46:06.877619       1 trace.go:236] Trace[824038458]: "List" accept:application/json, */*,audit-id:b1308063-f4dc-4424-8a20-51abed08d135,client:172.19.48.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (15-Apr-2024 17:46:06.367) (total time: 509ms):
	Trace[824038458]: ["List(recursive=true) etcd3" audit-id:b1308063-f4dc-4424-8a20-51abed08d135,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 509ms (17:46:06.367)]
	Trace[824038458]: [509.822476ms] [509.822476ms] END
	I0415 17:46:06.879290       1 trace.go:236] Trace[1998183128]: "List" accept:application/json, */*,audit-id:190e1037-acbd-4254-89d8-45e2c2c9709f,client:172.19.48.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (15-Apr-2024 17:46:06.319) (total time: 559ms):
	Trace[1998183128]: ["List(recursive=true) etcd3" audit-id:190e1037-acbd-4254-89d8-45e2c2c9709f,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 559ms (17:46:06.319)]
	Trace[1998183128]: [559.47268ms] [559.47268ms] END
	I0415 17:47:16.599906       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0415 17:47:17.707925       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0415 17:47:31.147346       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 172.19.57.138:8443->10.244.0.25:54538: read: connection reset by peer
	I0415 17:47:33.876535       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0415 17:47:50.215506       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0415 17:47:50.806095       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.56.160"}
	I0415 17:47:51.913217       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [d92a35233669] <==
	I0415 17:47:01.973736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="237.101µs"
	I0415 17:47:08.567942       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 17:47:09.522170       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 17:47:16.098785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-75d6c48ddd" duration="4.6µs"
	E0415 17:47:17.710070       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 17:47:19.123310       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 17:47:19.123350       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 17:47:21.910225       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 17:47:21.910296       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 17:47:22.577155       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0415 17:47:23.040770       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 17:47:23.568765       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0415 17:47:26.390212       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 17:47:26.390332       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 17:47:26.914636       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0415 17:47:28.485901       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="10.6µs"
	W0415 17:47:36.362082       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 17:47:36.362351       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 17:47:37.576737       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 17:47:38.568374       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 17:47:38.711461       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0415 17:47:38.711503       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 17:47:39.201244       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0415 17:47:39.201381       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 17:47:49.372425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="6.3µs"
	
	
	==> kube-proxy [53e7d066a4a0] <==
	I0415 17:43:53.165962       1 server_others.go:72] "Using iptables proxy"
	I0415 17:43:53.388629       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.57.138"]
	I0415 17:43:53.779585       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 17:43:53.779709       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 17:43:53.779737       1 server_others.go:168] "Using iptables Proxier"
	I0415 17:43:53.793591       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 17:43:53.794873       1 server.go:865] "Version info" version="v1.29.3"
	I0415 17:43:53.794910       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 17:43:53.797727       1 config.go:188] "Starting service config controller"
	I0415 17:43:53.797884       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 17:43:53.797936       1 config.go:97] "Starting endpoint slice config controller"
	I0415 17:43:53.797948       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 17:43:53.799366       1 config.go:315] "Starting node config controller"
	I0415 17:43:53.799508       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 17:43:53.906594       1 shared_informer.go:318] Caches are synced for node config
	I0415 17:43:53.906646       1 shared_informer.go:318] Caches are synced for service config
	I0415 17:43:53.906753       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [98f1480d5487] <==
	W0415 17:43:22.564144       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 17:43:22.565229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 17:43:22.588146       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 17:43:22.588355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 17:43:22.603774       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 17:43:22.603819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 17:43:22.610700       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 17:43:22.610746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 17:43:22.626696       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 17:43:22.626943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 17:43:22.654052       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 17:43:22.654176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 17:43:22.719075       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 17:43:22.719245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 17:43:22.831863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 17:43:22.831951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 17:43:22.910096       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 17:43:22.910196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 17:43:23.065902       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 17:43:23.066060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 17:43:23.086231       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 17:43:23.086285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 17:43:23.148516       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 17:43:23.148688       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0415 17:43:25.309042       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.497393    2119 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw8g8\" (UniqueName: \"kubernetes.io/projected/cc68f864-35da-400f-ab57-41f570e2b67d-kube-api-access-gw8g8\") pod \"cc68f864-35da-400f-ab57-41f570e2b67d\" (UID: \"cc68f864-35da-400f-ab57-41f570e2b67d\") "
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.497509    2119 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/cc68f864-35da-400f-ab57-41f570e2b67d-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8\") pod \"cc68f864-35da-400f-ab57-41f570e2b67d\" (UID: \"cc68f864-35da-400f-ab57-41f570e2b67d\") "
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.497599    2119 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc68f864-35da-400f-ab57-41f570e2b67d-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8" (OuterVolumeSpecName: "data") pod "cc68f864-35da-400f-ab57-41f570e2b67d" (UID: "cc68f864-35da-400f-ab57-41f570e2b67d"). InnerVolumeSpecName "pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.497634    2119 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc68f864-35da-400f-ab57-41f570e2b67d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "cc68f864-35da-400f-ab57-41f570e2b67d" (UID: "cc68f864-35da-400f-ab57-41f570e2b67d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.500344    2119 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc68f864-35da-400f-ab57-41f570e2b67d-kube-api-access-gw8g8" (OuterVolumeSpecName: "kube-api-access-gw8g8") pod "cc68f864-35da-400f-ab57-41f570e2b67d" (UID: "cc68f864-35da-400f-ab57-41f570e2b67d"). InnerVolumeSpecName "kube-api-access-gw8g8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.598084    2119 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cc68f864-35da-400f-ab57-41f570e2b67d-gcp-creds\") on node \"addons-961400\" DevicePath \"\""
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.598193    2119 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gw8g8\" (UniqueName: \"kubernetes.io/projected/cc68f864-35da-400f-ab57-41f570e2b67d-kube-api-access-gw8g8\") on node \"addons-961400\" DevicePath \"\""
	Apr 15 17:47:41 addons-961400 kubelet[2119]: I0415 17:47:41.598215    2119 reconciler_common.go:300] "Volume detached for volume \"pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8\" (UniqueName: \"kubernetes.io/host-path/cc68f864-35da-400f-ab57-41f570e2b67d-pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8\") on node \"addons-961400\" DevicePath \"\""
	Apr 15 17:47:42 addons-961400 kubelet[2119]: I0415 17:47:42.149447    2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f1785a498a24a047fe9a3481a2e1909151f365f9c7bb541d3a223f3f719983"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.181527    2119 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdwzk\" (UniqueName: \"kubernetes.io/projected/f5730ee6-1646-4c69-a454-1c22681d47f0-kube-api-access-fdwzk\") pod \"f5730ee6-1646-4c69-a454-1c22681d47f0\" (UID: \"f5730ee6-1646-4c69-a454-1c22681d47f0\") "
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.184365    2119 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5730ee6-1646-4c69-a454-1c22681d47f0-kube-api-access-fdwzk" (OuterVolumeSpecName: "kube-api-access-fdwzk") pod "f5730ee6-1646-4c69-a454-1c22681d47f0" (UID: "f5730ee6-1646-4c69-a454-1c22681d47f0"). InnerVolumeSpecName "kube-api-access-fdwzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.281838    2119 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fdwzk\" (UniqueName: \"kubernetes.io/projected/f5730ee6-1646-4c69-a454-1c22681d47f0-kube-api-access-fdwzk\") on node \"addons-961400\" DevicePath \"\""
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.429529    2119 scope.go:117] "RemoveContainer" containerID="e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.483390    2119 scope.go:117] "RemoveContainer" containerID="e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: E0415 17:47:50.487724    2119 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c" containerID="e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.487789    2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c"} err="failed to get container status \"e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c\": rpc error: code = Unknown desc = Error response from daemon: No such container: e13036f997fe5f2130e6507cc48b7fe5002f7be9ca79127cf34ef350a5a6071c"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.706892    2119 topology_manager.go:215] "Topology Admit Handler" podUID="6e4d2296-90df-40fb-a92c-7c5f725e78c4" podNamespace="default" podName="nginx"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: E0415 17:47:50.707206    2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc68f864-35da-400f-ab57-41f570e2b67d" containerName="busybox"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: E0415 17:47:50.707280    2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5730ee6-1646-4c69-a454-1c22681d47f0" containerName="tiller"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.707371    2119 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc68f864-35da-400f-ab57-41f570e2b67d" containerName="busybox"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.707438    2119 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5730ee6-1646-4c69-a454-1c22681d47f0" containerName="tiller"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.789992    2119 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6e4d2296-90df-40fb-a92c-7c5f725e78c4-gcp-creds\") pod \"nginx\" (UID: \"6e4d2296-90df-40fb-a92c-7c5f725e78c4\") " pod="default/nginx"
	Apr 15 17:47:50 addons-961400 kubelet[2119]: I0415 17:47:50.790169    2119 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8qr\" (UniqueName: \"kubernetes.io/projected/6e4d2296-90df-40fb-a92c-7c5f725e78c4-kube-api-access-fd8qr\") pod \"nginx\" (UID: \"6e4d2296-90df-40fb-a92c-7c5f725e78c4\") " pod="default/nginx"
	Apr 15 17:47:51 addons-961400 kubelet[2119]: I0415 17:47:51.494878    2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b55bfb601b3f8b93534765eb2f69caa5864ed1f9651d66de0f2bf5daedb64cc"
	Apr 15 17:47:51 addons-961400 kubelet[2119]: I0415 17:47:51.926657    2119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5730ee6-1646-4c69-a454-1c22681d47f0" path="/var/lib/kubelet/pods/f5730ee6-1646-4c69-a454-1c22681d47f0/volumes"
	
	
	==> storage-provisioner [4b26692b7ed6] <==
	I0415 17:44:15.475535       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 17:44:15.564971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 17:44:15.570817       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 17:44:15.641448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 17:44:15.641600       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-961400_78b58fde-8072-4dd9-9766-0c3583dc47b8!
	I0415 17:44:15.649373       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1bedc6cc-b332-43c0-9b6c-8417ddc5793e", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-961400_78b58fde-8072-4dd9-9766-0c3583dc47b8 became leader
	I0415 17:44:15.843096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-961400_78b58fde-8072-4dd9-9766-0c3583dc47b8!
	

-- /stdout --
** stderr ** 
	W0415 17:47:42.741405    8784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-961400 -n addons-961400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-961400 -n addons-961400: (13.5361984s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-961400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-52hh5 ingress-nginx-admission-patch-jhsbq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-961400 describe pod ingress-nginx-admission-create-52hh5 ingress-nginx-admission-patch-jhsbq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-961400 describe pod ingress-nginx-admission-create-52hh5 ingress-nginx-admission-patch-jhsbq: exit status 1 (204.0159ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-52hh5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jhsbq" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-961400 describe pod ingress-nginx-admission-create-52hh5 ingress-nginx-admission-patch-jhsbq: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.76s)

TestErrorSpam/setup (214.53s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-199200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 --driver=hyperv
E0415 17:51:53.535848   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:53.550572   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:53.561096   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:53.596440   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:53.643343   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:53.737274   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:53.908092   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:54.238694   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:54.888134   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:56.174295   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:51:58.738723   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:52:03.874611   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:52:14.124060   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:52:34.607488   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:53:15.575941   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:54:37.510666   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-199200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 --driver=hyperv: (3m34.530497s)
error_spam_test.go:96: unexpected stderr: "W0415 17:51:28.399046     800 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-199200] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18634
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-199200" primary control-plane node in "nospam-199200" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-199200" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0415 17:51:28.399046     800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (214.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (37.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-831100 -n functional-831100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-831100 -n functional-831100: (13.1319507s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 logs -n 25: (9.4420848s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| pause   | nospam-199200 --log_dir                                     | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:56 UTC | 15 Apr 24 17:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 |                   |                   |                |                     |                     |
	|         | pause                                                       |                   |                   |                |                     |                     |
	| unpause | nospam-199200 --log_dir                                     | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:56 UTC | 15 Apr 24 17:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-199200 --log_dir                                     | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:56 UTC | 15 Apr 24 17:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-199200 --log_dir                                     | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:56 UTC | 15 Apr 24 17:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-199200 --log_dir                                     | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:56 UTC | 15 Apr 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-199200 --log_dir                                     | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:57 UTC | 15 Apr 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-199200 --log_dir                                     | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:57 UTC | 15 Apr 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| delete  | -p nospam-199200                                            | nospam-199200     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:58 UTC | 15 Apr 24 17:58 UTC |
	| start   | -p functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:58 UTC | 15 Apr 24 18:02 UTC |
	|         | --memory=4000                                               |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |                |                     |                     |
	| start   | -p functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:02 UTC | 15 Apr 24 18:04 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |                |                     |                     |
	| cache   | functional-831100 cache add                                 | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:04 UTC | 15 Apr 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | functional-831100 cache add                                 | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:04 UTC | 15 Apr 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | functional-831100 cache add                                 | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:05 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-831100 cache add                                 | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-831100                 |                   |                   |                |                     |                     |
	| cache   | functional-831100 cache delete                              | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:05 UTC |
	|         | minikube-local-cache-test:functional-831100                 |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:05 UTC |
	| ssh     | functional-831100 ssh sudo                                  | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:05 UTC |
	|         | crictl images                                               |                   |                   |                |                     |                     |
	| ssh     | functional-831100                                           | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:05 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| ssh     | functional-831100 ssh                                       | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-831100 cache reload                              | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:05 UTC | 15 Apr 24 18:06 UTC |
	| ssh     | functional-831100 ssh                                       | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:06 UTC | 15 Apr 24 18:06 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:06 UTC | 15 Apr 24 18:06 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:06 UTC | 15 Apr 24 18:06 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| kubectl | functional-831100 kubectl --                                | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:06 UTC | 15 Apr 24 18:06 UTC |
	|         | --context functional-831100                                 |                   |                   |                |                     |                     |
	|         | get pods                                                    |                   |                   |                |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:02:31
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:02:31.694714    7372 out.go:291] Setting OutFile to fd 1012 ...
	I0415 18:02:31.696211    7372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:02:31.696211    7372 out.go:304] Setting ErrFile to fd 908...
	I0415 18:02:31.696211    7372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:02:31.725618    7372 out.go:298] Setting JSON to false
	I0415 18:02:31.729154    7372 start.go:129] hostinfo: {"hostname":"minikube6","uptime":15878,"bootTime":1713188273,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:02:31.730074    7372 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:02:31.733378    7372 out.go:177] * [functional-831100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:02:31.737593    7372 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:02:31.737593    7372 notify.go:220] Checking for updates...
	I0415 18:02:31.740556    7372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:02:31.749682    7372 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:02:31.757146    7372 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:02:31.761208    7372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:02:31.765356    7372 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:02:31.765759    7372 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:02:37.508837    7372 out.go:177] * Using the hyperv driver based on existing profile
	I0415 18:02:37.512517    7372 start.go:297] selected driver: hyperv
	I0415 18:02:37.512648    7372 start.go:901] validating driver "hyperv" against &{Name:functional-831100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-831100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.76 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:02:37.512752    7372 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:02:37.566615    7372 cni.go:84] Creating CNI manager for ""
	I0415 18:02:37.566615    7372 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:02:37.567238    7372 start.go:340] cluster config:
	{Name:functional-831100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-831100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.76 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:02:37.567238    7372 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:02:37.572293    7372 out.go:177] * Starting "functional-831100" primary control-plane node in "functional-831100" cluster
	I0415 18:02:37.574730    7372 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:02:37.574730    7372 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:02:37.574730    7372 cache.go:56] Caching tarball of preloaded images
	I0415 18:02:37.575748    7372 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:02:37.575748    7372 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:02:37.575748    7372 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\config.json ...
	I0415 18:02:37.578614    7372 start.go:360] acquireMachinesLock for functional-831100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:02:37.578614    7372 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-831100"
	I0415 18:02:37.578614    7372 start.go:96] Skipping create...Using existing machine configuration
	I0415 18:02:37.578614    7372 fix.go:54] fixHost starting: 
	I0415 18:02:37.578614    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:02:40.602800    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:02:40.603582    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:40.603582    7372 fix.go:112] recreateIfNeeded on functional-831100: state=Running err=<nil>
	W0415 18:02:40.603582    7372 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 18:02:40.607704    7372 out.go:177] * Updating the running hyperv "functional-831100" VM ...
	I0415 18:02:40.611089    7372 machine.go:94] provisionDockerMachine start ...
	I0415 18:02:40.611089    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:02:42.959330    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:02:42.959330    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:42.960061    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:02:45.765977    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:02:45.766347    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:45.773552    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:02:45.774130    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:02:45.774235    7372 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:02:45.920799    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-831100
	
	I0415 18:02:45.921337    7372 buildroot.go:166] provisioning hostname "functional-831100"
	I0415 18:02:45.921546    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:02:48.244477    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:02:48.244477    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:48.244658    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:02:51.020475    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:02:51.020559    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:51.027760    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:02:51.028429    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:02:51.028429    7372 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-831100 && echo "functional-831100" | sudo tee /etc/hostname
	I0415 18:02:51.204035    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-831100
	
	I0415 18:02:51.204035    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:02:53.572920    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:02:53.572920    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:53.572920    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:02:56.354875    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:02:56.354875    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:56.362531    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:02:56.363254    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:02:56.363254    7372 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-831100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-831100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-831100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:02:56.503995    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:02:56.504127    7372 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:02:56.504226    7372 buildroot.go:174] setting up certificates
	I0415 18:02:56.504271    7372 provision.go:84] configureAuth start
	I0415 18:02:56.504398    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:02:58.837017    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:02:58.837017    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:02:58.837118    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:01.615942    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:01.615942    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:01.615942    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:03.942844    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:03.942844    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:03.943332    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:06.729295    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:06.729295    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:06.729370    7372 provision.go:143] copyHostCerts
	I0415 18:03:06.729437    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:03:06.729437    7372 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:03:06.729437    7372 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:03:06.730212    7372 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:03:06.731427    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:03:06.731577    7372 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:03:06.731577    7372 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:03:06.731577    7372 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:03:06.733209    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:03:06.733442    7372 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:03:06.733442    7372 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:03:06.733975    7372 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:03:06.735192    7372 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-831100 san=[127.0.0.1 172.19.62.76 functional-831100 localhost minikube]
	I0415 18:03:06.845284    7372 provision.go:177] copyRemoteCerts
	I0415 18:03:06.859426    7372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:03:06.859426    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:09.188574    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:09.188574    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:09.188739    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:11.965787    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:11.965787    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:11.967362    7372 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
	I0415 18:03:12.073674    7372 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2141155s)
	I0415 18:03:12.073756    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:03:12.074570    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:03:12.132640    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:03:12.133335    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0415 18:03:12.185150    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:03:12.185150    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:03:12.237484    7372 provision.go:87] duration metric: took 15.7330871s to configureAuth
	I0415 18:03:12.237484    7372 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:03:12.238664    7372 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:03:12.238664    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:14.539538    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:14.539652    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:14.539704    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:17.288255    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:17.288255    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:17.294267    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:03:17.294342    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:03:17.294864    7372 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:03:17.430060    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:03:17.430060    7372 buildroot.go:70] root file system type: tmpfs
	I0415 18:03:17.430060    7372 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:03:17.430060    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:19.779463    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:19.780309    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:19.780417    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:22.557651    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:22.557651    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:22.565504    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:03:22.565574    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:03:22.566154    7372 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:03:22.731240    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:03:22.731848    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:25.035022    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:25.035022    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:25.036096    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:27.788827    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:27.789830    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:27.795448    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:03:27.795448    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:03:27.795448    7372 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:03:27.947101    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:03:27.947189    7372 machine.go:97] duration metric: took 47.3357214s to provisionDockerMachine
	I0415 18:03:27.947189    7372 start.go:293] postStartSetup for "functional-831100" (driver="hyperv")
	I0415 18:03:27.947189    7372 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:03:27.961780    7372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:03:27.961780    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:30.243809    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:30.243809    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:30.244174    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:32.981425    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:32.981555    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:32.981966    7372 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
	I0415 18:03:33.087276    7372 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1254549s)
	I0415 18:03:33.105747    7372 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:03:33.114264    7372 command_runner.go:130] > NAME=Buildroot
	I0415 18:03:33.114541    7372 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0415 18:03:33.114610    7372 command_runner.go:130] > ID=buildroot
	I0415 18:03:33.114673    7372 command_runner.go:130] > VERSION_ID=2023.02.9
	I0415 18:03:33.114673    7372 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0415 18:03:33.114756    7372 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:03:33.114820    7372 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:03:33.115103    7372 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:03:33.115784    7372 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:03:33.115784    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:03:33.117282    7372 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\11272\hosts -> hosts in /etc/test/nested/copy/11272
	I0415 18:03:33.117282    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\11272\hosts -> /etc/test/nested/copy/11272/hosts
	I0415 18:03:33.130850    7372 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11272
	I0415 18:03:33.150882    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:03:33.202233    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\11272\hosts --> /etc/test/nested/copy/11272/hosts (40 bytes)
	I0415 18:03:33.252516    7372 start.go:296] duration metric: took 5.3052857s for postStartSetup
	I0415 18:03:33.252685    7372 fix.go:56] duration metric: took 55.6736266s for fixHost
	I0415 18:03:33.252799    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:35.521469    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:35.521469    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:35.522171    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:38.239271    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:38.239271    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:38.246119    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:03:38.246799    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:03:38.246869    7372 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:03:38.383329    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713204218.390850007
	
	I0415 18:03:38.383329    7372 fix.go:216] guest clock: 1713204218.390850007
	I0415 18:03:38.383329    7372 fix.go:229] Guest: 2024-04-15 18:03:38.390850007 +0000 UTC Remote: 2024-04-15 18:03:33.2526853 +0000 UTC m=+61.743517201 (delta=5.138164707s)
	I0415 18:03:38.383609    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:40.714382    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:40.715166    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:40.715308    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:43.508641    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:43.508641    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:43.516042    7372 main.go:141] libmachine: Using SSH client type: native
	I0415 18:03:43.516193    7372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.76 22 <nil> <nil>}
	I0415 18:03:43.516193    7372 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713204218
	I0415 18:03:43.673086    7372 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:03:38 UTC 2024
	
	I0415 18:03:43.673186    7372 fix.go:236] clock set: Mon Apr 15 18:03:38 UTC 2024
	 (err=<nil>)
	I0415 18:03:43.673186    7372 start.go:83] releasing machines lock for "functional-831100", held for 1m6.0940446s
	I0415 18:03:43.673446    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:45.986792    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:45.987689    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:45.987689    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:48.737638    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:48.737638    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:48.742455    7372 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:03:48.742643    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:48.754752    7372 ssh_runner.go:195] Run: cat /version.json
	I0415 18:03:48.755766    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:03:51.156977    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:51.156977    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:51.156977    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:03:51.156977    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:51.157126    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:51.157126    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:03:53.986641    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:53.986641    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:53.987255    7372 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
	I0415 18:03:54.057637    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:03:54.057637    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:03:54.058161    7372 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
	I0415 18:03:54.149724    7372 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0415 18:03:54.149724    7372 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4071487s)
	I0415 18:03:54.150860    7372 command_runner.go:130] > {"iso_version": "v1.33.0-1713175573-18634", "kicbase_version": "v0.0.43-1712854342-18621", "minikube_version": "v1.33.0-beta.0", "commit": "0ece0b4c602cbaab0821f0ba2d6ec4a07a392655"}
	I0415 18:03:54.150860    7372 ssh_runner.go:235] Completed: cat /version.json: (5.3950518s)
	I0415 18:03:54.167019    7372 ssh_runner.go:195] Run: systemctl --version
	I0415 18:03:54.178622    7372 command_runner.go:130] > systemd 252 (252)
	I0415 18:03:54.178777    7372 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0415 18:03:54.193888    7372 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:03:54.201961    7372 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0415 18:03:54.203706    7372 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:03:54.220227    7372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:03:54.240205    7372 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0415 18:03:54.240205    7372 start.go:494] detecting cgroup driver to use...
	I0415 18:03:54.240205    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:03:54.281119    7372 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0415 18:03:54.296324    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:03:54.342832    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:03:54.365118    7372 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:03:54.379458    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:03:54.415256    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:03:54.454776    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:03:54.493706    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:03:54.532809    7372 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:03:54.569382    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:03:54.605625    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:03:54.643743    7372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
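The run of sed commands above rewrites /etc/containerd/config.toml in place to use the cgroupfs driver. A self-contained sketch of three of those edits against a sample config.toml (sample contents are assumed for illustration; the real target is containerd's generated config, edited via sudo):

```shell
# Reproduce three of the config.toml edits from the log against a sample
# file in a temp dir: pin the pause image, disable SystemdCgroup, and
# point conf_dir at /etc/cni/net.d.
tmp=$(mktemp -d)
cat > "$tmp/config.toml" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
    SystemdCgroup = true
    conf_dir = "/opt/cni/net.d"
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$tmp/config.toml"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp/config.toml"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$tmp/config.toml"
cat "$tmp/config.toml"
```

The `\1` backreference preserves the original indentation, which is why every pattern captures the leading spaces.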
	I0415 18:03:54.677755    7372 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:03:54.697759    7372 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0415 18:03:54.711526    7372 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:03:54.747099    7372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:03:55.066760    7372 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:03:55.107189    7372 start.go:494] detecting cgroup driver to use...
	I0415 18:03:55.122231    7372 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:03:55.153535    7372 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0415 18:03:55.153535    7372 command_runner.go:130] > [Unit]
	I0415 18:03:55.153535    7372 command_runner.go:130] > Description=Docker Application Container Engine
	I0415 18:03:55.153535    7372 command_runner.go:130] > Documentation=https://docs.docker.com
	I0415 18:03:55.153535    7372 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0415 18:03:55.153535    7372 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0415 18:03:55.153535    7372 command_runner.go:130] > StartLimitBurst=3
	I0415 18:03:55.153535    7372 command_runner.go:130] > StartLimitIntervalSec=60
	I0415 18:03:55.153535    7372 command_runner.go:130] > [Service]
	I0415 18:03:55.153535    7372 command_runner.go:130] > Type=notify
	I0415 18:03:55.153535    7372 command_runner.go:130] > Restart=on-failure
	I0415 18:03:55.153535    7372 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0415 18:03:55.153535    7372 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0415 18:03:55.153535    7372 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0415 18:03:55.153535    7372 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0415 18:03:55.153535    7372 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0415 18:03:55.153535    7372 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0415 18:03:55.153535    7372 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0415 18:03:55.153535    7372 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0415 18:03:55.153535    7372 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0415 18:03:55.153535    7372 command_runner.go:130] > ExecStart=
	I0415 18:03:55.153535    7372 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0415 18:03:55.153535    7372 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0415 18:03:55.154064    7372 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0415 18:03:55.154064    7372 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0415 18:03:55.154064    7372 command_runner.go:130] > LimitNOFILE=infinity
	I0415 18:03:55.154113    7372 command_runner.go:130] > LimitNPROC=infinity
	I0415 18:03:55.154113    7372 command_runner.go:130] > LimitCORE=infinity
	I0415 18:03:55.154113    7372 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0415 18:03:55.154157    7372 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0415 18:03:55.154157    7372 command_runner.go:130] > TasksMax=infinity
	I0415 18:03:55.154157    7372 command_runner.go:130] > TimeoutStartSec=0
	I0415 18:03:55.154157    7372 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0415 18:03:55.154157    7372 command_runner.go:130] > Delegate=yes
	I0415 18:03:55.154213    7372 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0415 18:03:55.154213    7372 command_runner.go:130] > KillMode=process
	I0415 18:03:55.154213    7372 command_runner.go:130] > [Install]
	I0415 18:03:55.154213    7372 command_runner.go:130] > WantedBy=multi-user.target
	I0415 18:03:55.168337    7372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:03:55.212120    7372 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:03:55.270150    7372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:03:55.316807    7372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:03:55.343531    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:03:55.381728    7372 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0415 18:03:55.397572    7372 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:03:55.405772    7372 command_runner.go:130] > /usr/bin/cri-dockerd
	I0415 18:03:55.419669    7372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:03:55.440570    7372 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:03:55.496894    7372 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:03:55.778596    7372 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:03:56.067764    7372 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:03:56.068000    7372 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
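docker.go:574 above reports configuring Docker's cgroup driver by scp-ing a 130-byte /etc/docker/daemon.json, whose contents the log does not show. A plausible sketch under that assumption (the `exec-opts` key is the standard way to pin the driver; the JSON body here is illustrative, staged in a temp dir):

```shell
# Hypothetical daemon.json for the cgroupfs configuration step; the real
# file is written to /etc/docker/daemon.json and its exact contents are
# not captured in this log.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep -q 'native.cgroupdriver=cgroupfs' "$tmp/daemon.json" && echo "daemon.json staged"
```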
	I0415 18:03:56.133217    7372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:03:56.417771    7372 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:04:09.461947    7372 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0440734s)
	I0415 18:04:09.474933    7372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:04:09.518457    7372 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0415 18:04:09.571456    7372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:04:09.610446    7372 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:04:09.844652    7372 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:04:10.091906    7372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:04:10.355521    7372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:04:10.401446    7372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:04:10.440702    7372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:04:10.676682    7372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:04:10.839492    7372 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:04:10.853690    7372 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:04:10.863555    7372 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0415 18:04:10.863555    7372 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0415 18:04:10.863555    7372 command_runner.go:130] > Device: 0,22	Inode: 1534        Links: 1
	I0415 18:04:10.863555    7372 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0415 18:04:10.863649    7372 command_runner.go:130] > Access: 2024-04-15 18:04:10.825259773 +0000
	I0415 18:04:10.863649    7372 command_runner.go:130] > Modify: 2024-04-15 18:04:10.725257115 +0000
	I0415 18:04:10.863649    7372 command_runner.go:130] > Change: 2024-04-15 18:04:10.730257247 +0000
	I0415 18:04:10.863681    7372 command_runner.go:130] >  Birth: -
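start.go:541 above declares a 60-second wait for the socket path, which resolves on the first `stat`. A hedged sketch of that poll loop (run against a temp file created up-front, so it exits on the first iteration):

```shell
# Poll `stat` until the path exists or ~60s elapse, mirroring the
# "Will wait 60s for socket path" step; a temp file stands in for
# /var/run/cri-dockerd.sock.
sock=$(mktemp)
deadline=$((SECONDS + 60))
until stat "$sock" > /dev/null 2>&1; do
  if [ "$SECONDS" -ge "$deadline" ]; then echo "timed out"; exit 1; fi
  sleep 1
done
echo "socket ready"
```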
	I0415 18:04:10.863681    7372 start.go:562] Will wait 60s for crictl version
	I0415 18:04:10.877182    7372 ssh_runner.go:195] Run: which crictl
	I0415 18:04:10.884034    7372 command_runner.go:130] > /usr/bin/crictl
	I0415 18:04:10.898063    7372 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:04:10.956879    7372 command_runner.go:130] > Version:  0.1.0
	I0415 18:04:10.956879    7372 command_runner.go:130] > RuntimeName:  docker
	I0415 18:04:10.956879    7372 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0415 18:04:10.956879    7372 command_runner.go:130] > RuntimeApiVersion:  v1
	I0415 18:04:10.956879    7372 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:04:10.966875    7372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:04:11.002989    7372 command_runner.go:130] > 26.0.0
	I0415 18:04:11.014292    7372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:04:11.049359    7372 command_runner.go:130] > 26.0.0
	I0415 18:04:11.054569    7372 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:04:11.054771    7372 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:04:11.060202    7372 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:04:11.060202    7372 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:04:11.060202    7372 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:04:11.060202    7372 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:04:11.063203    7372 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:04:11.063203    7372 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:04:11.076899    7372 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:04:11.082862    7372 command_runner.go:130] > 172.19.48.1	host.minikube.internal
	I0415 18:04:11.083792    7372 kubeadm.go:877] updating cluster {Name:functional-831100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.29.3 ClusterName:functional-831100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.76 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:04:11.083792    7372 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:04:11.097115    7372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:04:11.123947    7372 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0415 18:04:11.123947    7372 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0415 18:04:11.123947    7372 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0415 18:04:11.123947    7372 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0415 18:04:11.123947    7372 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0415 18:04:11.123947    7372 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0415 18:04:11.124112    7372 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0415 18:04:11.124112    7372 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:04:11.124212    7372 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:04:11.124264    7372 docker.go:615] Images already preloaded, skipping extraction
	I0415 18:04:11.135719    7372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:04:11.163659    7372 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0415 18:04:11.163738    7372 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0415 18:04:11.163738    7372 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0415 18:04:11.163738    7372 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0415 18:04:11.163738    7372 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0415 18:04:11.163738    7372 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0415 18:04:11.163738    7372 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0415 18:04:11.163738    7372 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:04:11.163867    7372 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:04:11.163921    7372 cache_images.go:84] Images are preloaded, skipping loading
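The "Images are preloaded" decision above boils down to a containment check: every required tag must already appear in the `docker images --format {{.Repository}}:{{.Tag}}` listing. A daemon-free sketch of that check, using a subset of the image list from the log:

```shell
# Simulated `docker images` output (taken from the log) checked against a
# required-image list; no Docker daemon is assumed.
have='registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/pause:3.9'
missing=0
for img in registry.k8s.io/kube-apiserver:v1.29.3 registry.k8s.io/pause:3.9; do
  printf '%s\n' "$have" | grep -Fqx "$img" || { echo "missing: $img"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "images preloaded, skipping load"
```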
	I0415 18:04:11.163957    7372 kubeadm.go:928] updating node { 172.19.62.76 8441 v1.29.3 docker true true} ...
	I0415 18:04:11.164249    7372 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-831100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.62.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:functional-831100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:04:11.176646    7372 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:04:11.211347    7372 command_runner.go:130] > cgroupfs
	I0415 18:04:11.211723    7372 cni.go:84] Creating CNI manager for ""
	I0415 18:04:11.211862    7372 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:04:11.211930    7372 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:04:11.211964    7372 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.62.76 APIServerPort:8441 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-831100 NodeName:functional-831100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.62.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.62.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:04:11.212842    7372 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.62.76
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-831100"
	  kubeletExtraArgs:
	    node-ip: 172.19.62.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.62.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:04:11.229881    7372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:04:11.250621    7372 command_runner.go:130] > kubeadm
	I0415 18:04:11.250621    7372 command_runner.go:130] > kubectl
	I0415 18:04:11.250621    7372 command_runner.go:130] > kubelet
	I0415 18:04:11.250621    7372 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:04:11.266078    7372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 18:04:11.283671    7372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0415 18:04:11.318125    7372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:04:11.353020    7372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
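ssh_runner.go:362 above stages the generated config as kubeadm.yaml.new before it replaces the live file. A temp-dir sketch of that staged write plus a field check (contents abbreviated from the config printed earlier in this log):

```shell
# Stage an abbreviated kubeadm config under a ".new" name, as the scp
# step does, then verify a key field before it would be promoted.
tmp=$(mktemp -d)
cat > "$tmp/kubeadm.yaml.new" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8441
kubernetesVersion: v1.29.3
EOF
grep -q 'controlPlaneEndpoint: control-plane.minikube.internal:8441' \
  "$tmp/kubeadm.yaml.new" && echo "config staged"
```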
	I0415 18:04:11.403585    7372 ssh_runner.go:195] Run: grep 172.19.62.76	control-plane.minikube.internal$ /etc/hosts
	I0415 18:04:11.410021    7372 command_runner.go:130] > 172.19.62.76	control-plane.minikube.internal
	I0415 18:04:11.423110    7372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:04:11.675071    7372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:04:11.705446    7372 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100 for IP: 172.19.62.76
	I0415 18:04:11.705621    7372 certs.go:194] generating shared ca certs ...
	I0415 18:04:11.705621    7372 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:04:11.706032    7372 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:04:11.706567    7372 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:04:11.706825    7372 certs.go:256] generating profile certs ...
	I0415 18:04:11.709110    7372 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.key
	I0415 18:04:11.709682    7372 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\apiserver.key.c1e6532a
	I0415 18:04:11.710066    7372 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\proxy-client.key
	I0415 18:04:11.710066    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:04:11.710450    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:04:11.710618    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:04:11.710824    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:04:11.711008    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:04:11.711214    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:04:11.711406    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:04:11.711573    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:04:11.712164    7372 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:04:11.712699    7372 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:04:11.712699    7372 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:04:11.712932    7372 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:04:11.713474    7372 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:04:11.713672    7372 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:04:11.714201    7372 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:04:11.714424    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:04:11.714424    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:04:11.714424    7372 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:04:11.715166    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:04:11.770144    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:04:11.821853    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:04:11.871855    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:04:11.926313    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:04:11.985589    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 18:04:12.032004    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:04:12.085888    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:04:12.137331    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:04:12.191577    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:04:12.245265    7372 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:04:12.294595    7372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:04:12.343358    7372 ssh_runner.go:195] Run: openssl version
	I0415 18:04:12.352552    7372 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0415 18:04:12.368332    7372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:04:12.406080    7372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:04:12.414580    7372 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:04:12.414580    7372 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:04:12.431799    7372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:04:12.441824    7372 command_runner.go:130] > 3ec20f2e
	I0415 18:04:12.460912    7372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:04:12.495226    7372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:04:12.531458    7372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:04:12.539164    7372 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:04:12.539440    7372 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:04:12.554071    7372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:04:12.563665    7372 command_runner.go:130] > b5213941
	I0415 18:04:12.579545    7372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:04:12.612997    7372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:04:12.649417    7372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:04:12.660124    7372 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:04:12.660355    7372 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:04:12.673806    7372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:04:12.685421    7372 command_runner.go:130] > 51391683
	I0415 18:04:12.700315    7372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
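The three-step pattern above (test/link the PEM, compute its subject hash, create the `<hash>.0` symlink) is OpenSSL's CA-lookup convention: the library finds trusted CAs in `/etc/ssl/certs` by subject-hash filename. A minimal sketch under illustrative paths (`/tmp/demo.pem` is not a real minikube cert):

```shell
# Generate a throwaway self-signed CA to stand in for 112722.pem (assumption:
# openssl is installed; all paths here are illustrative, not minikube's).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 2>/dev/null

# Compute the subject hash, as the log's `openssl x509 -hash -noout` step does.
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)

# Create the <hash>.0 symlink OpenSSL uses for CA lookup
# (normally under /etc/ssl/certs, here /tmp to stay unprivileged).
ln -fs /tmp/demo.pem "/tmp/${hash}.0"
echo "$hash"
```

The `.0` suffix is a collision counter: a second CA with the same subject hash would get `.1`, and so on.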
	I0415 18:04:12.734680    7372 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:04:12.742231    7372 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:04:12.742528    7372 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0415 18:04:12.742528    7372 command_runner.go:130] > Device: 8,1	Inode: 3149102     Links: 1
	I0415 18:04:12.742528    7372 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0415 18:04:12.742528    7372 command_runner.go:130] > Access: 2024-04-15 18:01:21.015497140 +0000
	I0415 18:04:12.742528    7372 command_runner.go:130] > Modify: 2024-04-15 18:01:21.015497140 +0000
	I0415 18:04:12.742528    7372 command_runner.go:130] > Change: 2024-04-15 18:01:21.015497140 +0000
	I0415 18:04:12.742528    7372 command_runner.go:130] >  Birth: 2024-04-15 18:01:21.015497140 +0000
	I0415 18:04:12.757127    7372 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 18:04:12.767146    7372 command_runner.go:130] > Certificate will not expire
	I0415 18:04:12.781660    7372 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 18:04:12.789690    7372 command_runner.go:130] > Certificate will not expire
	I0415 18:04:12.803011    7372 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 18:04:12.813443    7372 command_runner.go:130] > Certificate will not expire
	I0415 18:04:12.828839    7372 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 18:04:12.838372    7372 command_runner.go:130] > Certificate will not expire
	I0415 18:04:12.852714    7372 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 18:04:12.863373    7372 command_runner.go:130] > Certificate will not expire
	I0415 18:04:12.877968    7372 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0415 18:04:12.889979    7372 command_runner.go:130] > Certificate will not expire
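Each "Certificate will not expire" line above comes from `openssl x509 -checkend N`, which exits 0 when the certificate is still valid `N` seconds from now; minikube uses a 24-hour horizon (86400 s) before deciding it can reuse the existing control-plane certs. A sketch with a throwaway cert (the path is illustrative):

```shell
# Self-signed stand-in for a control-plane cert, valid 2 days (assumption:
# openssl is installed; /tmp/expiry.crt is illustrative, not a minikube path).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=expiry-demo" \
  -keyout /tmp/expiry.key -out /tmp/expiry.crt -days 2 2>/dev/null

# Same probe the log runs: will this cert still be valid in 24h?
openssl x509 -noout -in /tmp/expiry.crt -checkend 86400   # prints "Certificate will not expire"
```

A cert expiring inside the window would instead print "Certificate will expire" and exit 1, which is what would push minikube into regenerating certs in the `kubeadm init phase certs` step that follows.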
	I0415 18:04:12.890385    7372 kubeadm.go:391] StartCluster: {Name:functional-831100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-831100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.76 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:04:12.904252    7372 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:04:12.945100    7372 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:04:12.979258    7372 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0415 18:04:12.979258    7372 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0415 18:04:12.979258    7372 command_runner.go:130] > /var/lib/minikube/etcd:
	I0415 18:04:12.979258    7372 command_runner.go:130] > member
	W0415 18:04:12.979258    7372 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 18:04:12.979258    7372 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 18:04:12.979258    7372 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 18:04:12.994410    7372 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 18:04:13.025114    7372 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:04:13.026577    7372 kubeconfig.go:125] found "functional-831100" server: "https://172.19.62.76:8441"
	I0415 18:04:13.028216    7372 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:04:13.028520    7372 kapi.go:59] client config for functional-831100: &rest.Config{Host:"https://172.19.62.76:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-831100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-831100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:04:13.030702    7372 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:04:13.044757    7372 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 18:04:13.065211    7372 kubeadm.go:624] The running cluster does not require reconfiguration: 172.19.62.76
	I0415 18:04:13.065286    7372 kubeadm.go:1154] stopping kube-system containers ...
	I0415 18:04:13.077277    7372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:04:13.124246    7372 command_runner.go:130] > 75a1acb33c12
	I0415 18:04:13.124367    7372 command_runner.go:130] > 7ad3153f9e9d
	I0415 18:04:13.124367    7372 command_runner.go:130] > da75672ff19a
	I0415 18:04:13.124367    7372 command_runner.go:130] > f28bec73517a
	I0415 18:04:13.124367    7372 command_runner.go:130] > fec28243b30d
	I0415 18:04:13.124367    7372 command_runner.go:130] > 6bc4a2c98c17
	I0415 18:04:13.124367    7372 command_runner.go:130] > 438e7aa22ff1
	I0415 18:04:13.124367    7372 command_runner.go:130] > c902f023614f
	I0415 18:04:13.124367    7372 command_runner.go:130] > 698fa3050fb3
	I0415 18:04:13.124367    7372 command_runner.go:130] > 765386ae687c
	I0415 18:04:13.124367    7372 command_runner.go:130] > 8cf9693690cd
	I0415 18:04:13.124499    7372 command_runner.go:130] > 1c40010a4a72
	I0415 18:04:13.124523    7372 command_runner.go:130] > 9d2c2ef3c426
	I0415 18:04:13.124523    7372 command_runner.go:130] > fae332a0ecc2
	I0415 18:04:13.124523    7372 docker.go:483] Stopping containers: [75a1acb33c12 7ad3153f9e9d da75672ff19a f28bec73517a fec28243b30d 6bc4a2c98c17 438e7aa22ff1 c902f023614f 698fa3050fb3 765386ae687c 8cf9693690cd 1c40010a4a72 9d2c2ef3c426 fae332a0ecc2]
	I0415 18:04:13.136590    7372 ssh_runner.go:195] Run: docker stop 75a1acb33c12 7ad3153f9e9d da75672ff19a f28bec73517a fec28243b30d 6bc4a2c98c17 438e7aa22ff1 c902f023614f 698fa3050fb3 765386ae687c 8cf9693690cd 1c40010a4a72 9d2c2ef3c426 fae332a0ecc2
	I0415 18:04:13.173542    7372 command_runner.go:130] > 75a1acb33c12
	I0415 18:04:13.173542    7372 command_runner.go:130] > 7ad3153f9e9d
	I0415 18:04:13.173542    7372 command_runner.go:130] > da75672ff19a
	I0415 18:04:13.174301    7372 command_runner.go:130] > f28bec73517a
	I0415 18:04:13.174301    7372 command_runner.go:130] > fec28243b30d
	I0415 18:04:13.174301    7372 command_runner.go:130] > 6bc4a2c98c17
	I0415 18:04:13.174301    7372 command_runner.go:130] > 438e7aa22ff1
	I0415 18:04:13.174301    7372 command_runner.go:130] > c902f023614f
	I0415 18:04:13.174301    7372 command_runner.go:130] > 698fa3050fb3
	I0415 18:04:13.174301    7372 command_runner.go:130] > 765386ae687c
	I0415 18:04:13.174301    7372 command_runner.go:130] > 8cf9693690cd
	I0415 18:04:13.174301    7372 command_runner.go:130] > 1c40010a4a72
	I0415 18:04:13.174301    7372 command_runner.go:130] > 9d2c2ef3c426
	I0415 18:04:13.174301    7372 command_runner.go:130] > fae332a0ecc2
	I0415 18:04:13.188938    7372 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0415 18:04:13.256776    7372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:04:13.281458    7372 command_runner.go:130] > -rw------- 1 root root 5647 Apr 15 18:01 /etc/kubernetes/admin.conf
	I0415 18:04:13.281554    7372 command_runner.go:130] > -rw------- 1 root root 5656 Apr 15 18:01 /etc/kubernetes/controller-manager.conf
	I0415 18:04:13.281554    7372 command_runner.go:130] > -rw------- 1 root root 2007 Apr 15 18:01 /etc/kubernetes/kubelet.conf
	I0415 18:04:13.281554    7372 command_runner.go:130] > -rw------- 1 root root 5604 Apr 15 18:01 /etc/kubernetes/scheduler.conf
	I0415 18:04:13.281748    7372 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Apr 15 18:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Apr 15 18:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Apr 15 18:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr 15 18:01 /etc/kubernetes/scheduler.conf
	
	I0415 18:04:13.296326    7372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0415 18:04:13.319376    7372 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0415 18:04:13.334143    7372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0415 18:04:13.354041    7372 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0415 18:04:13.369437    7372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0415 18:04:13.388997    7372 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:04:13.405097    7372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:04:13.442254    7372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0415 18:04:13.466279    7372 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:04:13.483226    7372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:04:13.520581    7372 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:04:13.546819    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 18:04:13.666453    7372 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:04:13.666519    7372 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0415 18:04:13.666519    7372 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0415 18:04:13.666519    7372 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0415 18:04:13.666519    7372 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0415 18:04:13.666519    7372 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0415 18:04:13.666519    7372 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0415 18:04:13.666614    7372 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0415 18:04:13.666652    7372 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0415 18:04:13.666652    7372 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0415 18:04:13.666652    7372 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0415 18:04:13.666652    7372 command_runner.go:130] > [certs] Using the existing "sa" key
	I0415 18:04:13.666652    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 18:04:15.551690    7372 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:04:15.551917    7372 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0415 18:04:15.551917    7372 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0415 18:04:15.552023    7372 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0415 18:04:15.552023    7372 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:04:15.552023    7372 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:04:15.552096    7372 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.8853569s)
	I0415 18:04:15.552142    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0415 18:04:15.930926    7372 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:04:15.931668    7372 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:04:15.931714    7372 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0415 18:04:15.931714    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 18:04:16.048418    7372 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:04:16.048629    7372 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:04:16.048629    7372 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:04:16.048668    7372 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:04:16.048668    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0415 18:04:16.181927    7372 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:04:16.181927    7372 api_server.go:52] waiting for apiserver process to appear ...
	I0415 18:04:16.196022    7372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:04:16.699488    7372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:04:17.206770    7372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:04:17.241427    7372 command_runner.go:130] > 5237
	I0415 18:04:17.242330    7372 api_server.go:72] duration metric: took 1.0603098s to wait for apiserver process to appear ...
	I0415 18:04:17.242330    7372 api_server.go:88] waiting for apiserver healthz status ...
	I0415 18:04:17.242406    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:21.129793    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0415 18:04:21.130350    7372 api_server.go:103] status: https://172.19.62.76:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0415 18:04:21.130350    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:21.283175    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 18:04:21.283248    7372 api_server.go:103] status: https://172.19.62.76:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 18:04:21.283248    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:21.298906    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 18:04:21.298906    7372 api_server.go:103] status: https://172.19.62.76:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 18:04:21.748804    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:21.758393    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 18:04:21.758698    7372 api_server.go:103] status: https://172.19.62.76:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 18:04:22.253043    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:22.263894    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 18:04:22.264158    7372 api_server.go:103] status: https://172.19.62.76:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 18:04:22.744180    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:22.759343    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 18:04:22.759469    7372 api_server.go:103] status: https://172.19.62.76:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 18:04:23.248839    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:23.279405    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 200:
	ok
	I0415 18:04:23.280331    7372 round_trippers.go:463] GET https://172.19.62.76:8441/version
	I0415 18:04:23.280331    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:23.280331    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:23.280548    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:23.305189    7372 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0415 18:04:23.305189    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:23.305189    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:23.305856    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:23.305856    7372 round_trippers.go:580]     Content-Length: 263
	I0415 18:04:23.305856    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:23 GMT
	I0415 18:04:23.305856    7372 round_trippers.go:580]     Audit-Id: 4cd91126-0626-4457-9dbe-2aaf74f3ef25
	I0415 18:04:23.305856    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:23.305856    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:23.305926    7372 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0415 18:04:23.306138    7372 api_server.go:141] control plane version: v1.29.3
	I0415 18:04:23.306197    7372 api_server.go:131] duration metric: took 6.0637808s to wait for apiserver health ...
	I0415 18:04:23.306197    7372 cni.go:84] Creating CNI manager for ""
	I0415 18:04:23.306255    7372 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:04:23.308073    7372 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 18:04:23.324901    7372 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 18:04:23.348919    7372 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0415 18:04:23.398611    7372 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 18:04:23.398611    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods
	I0415 18:04:23.398611    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:23.398611    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:23.398611    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:23.409196    7372 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0415 18:04:23.409196    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:23.409196    7372 round_trippers.go:580]     Audit-Id: cc66820a-6330-4fb8-a492-6e621ac27024
	I0415 18:04:23.409196    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:23.409196    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:23.409196    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:23.409196    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:23.409196    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:23 GMT
	I0415 18:04:23.410194    7372 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"592"},"items":[{"metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"577","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51611 chars]
	I0415 18:04:23.415382    7372 system_pods.go:59] 7 kube-system pods found
	I0415 18:04:23.415382    7372 system_pods.go:61] "coredns-76f75df574-sd42f" [a05305e5-a9c7-4bee-9329-bc4608f0f7b8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0415 18:04:23.415382    7372 system_pods.go:61] "etcd-functional-831100" [0151e2e9-8814-43eb-91a8-33221f5e6293] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0415 18:04:23.415382    7372 system_pods.go:61] "kube-apiserver-functional-831100" [3917e8a9-aeeb-4bec-9d3e-01855f643c6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 18:04:23.415382    7372 system_pods.go:61] "kube-controller-manager-functional-831100" [67d99219-f151-4281-8f69-ed09b79937d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0415 18:04:23.415382    7372 system_pods.go:61] "kube-proxy-sfdhl" [e82d2eca-3bbb-407f-9639-db448fa365db] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0415 18:04:23.415382    7372 system_pods.go:61] "kube-scheduler-functional-831100" [fc7f4de2-5606-4f85-b9d6-8947a4e27303] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0415 18:04:23.415382    7372 system_pods.go:61] "storage-provisioner" [9494c9a1-8863-43cd-91b2-67524861807c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0415 18:04:23.415382    7372 system_pods.go:74] duration metric: took 16.7708ms to wait for pod list to return data ...
	I0415 18:04:23.415382    7372 node_conditions.go:102] verifying NodePressure condition ...
	I0415 18:04:23.415382    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes
	I0415 18:04:23.415382    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:23.415382    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:23.415382    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:23.421184    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:23.421349    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:23.421349    7372 round_trippers.go:580]     Audit-Id: 4d04bcb1-ac39-4827-b709-4d4f25a87119
	I0415 18:04:23.421349    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:23.421468    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:23.421490    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:23.421490    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:23.421490    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:23 GMT
	I0415 18:04:23.421796    7372 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"592"},"items":[{"metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":" [truncated 4846 chars]
	I0415 18:04:23.422611    7372 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 18:04:23.422611    7372 node_conditions.go:123] node cpu capacity is 2
	I0415 18:04:23.422611    7372 node_conditions.go:105] duration metric: took 7.2287ms to run NodePressure ...
	I0415 18:04:23.422611    7372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 18:04:23.854586    7372 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0415 18:04:23.854586    7372 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0415 18:04:23.854718    7372 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0415 18:04:23.854718    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0415 18:04:23.854718    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:23.854718    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:23.854718    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:23.859780    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:23.859780    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:23.859780    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:23 GMT
	I0415 18:04:23.859780    7372 round_trippers.go:580]     Audit-Id: 6bae47c8-d5e8-4042-aebe-7e46a89fb57a
	I0415 18:04:23.859780    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:23.859780    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:23.859780    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:23.859780    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:23.861608    7372 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"597"},"items":[{"metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"581","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30926 chars]
	I0415 18:04:23.862874    7372 kubeadm.go:733] kubelet initialised
	I0415 18:04:23.862874    7372 kubeadm.go:734] duration metric: took 8.156ms waiting for restarted kubelet to initialise ...
	I0415 18:04:23.863406    7372 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 18:04:23.863406    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods
	I0415 18:04:23.863406    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:23.863590    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:23.863590    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:23.868454    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:23.869270    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:23.869270    7372 round_trippers.go:580]     Audit-Id: a7afe2ba-850f-4b9e-8492-9e152c5fd94f
	I0415 18:04:23.869270    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:23.869270    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:23.869270    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:23.869270    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:23.869340    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:23 GMT
	I0415 18:04:23.871338    7372 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"597"},"items":[{"metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51258 chars]
	I0415 18:04:23.873595    7372 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-sd42f" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:23.874253    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:23.874253    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:23.874253    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:23.874253    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:23.876857    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:23.876857    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:23.876857    7372 round_trippers.go:580]     Audit-Id: d28217aa-93ad-45f0-b477-3019c03af646
	I0415 18:04:23.876857    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:23.876857    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:23.876857    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:23.876857    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:23.876857    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:23 GMT
	I0415 18:04:23.877831    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:23.878821    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:23.878821    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:23.878821    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:23.878821    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:23.881823    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:23.881823    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:23.881823    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:23.881823    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:23.881823    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:23.881823    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:23.881823    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:23 GMT
	I0415 18:04:23.881823    7372 round_trippers.go:580]     Audit-Id: aaaf24bf-c50d-4381-b31a-b190812c58fe
	I0415 18:04:23.881823    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:24.380775    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:24.380924    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:24.380983    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:24.380983    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:24.384594    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:24.384594    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:24.384594    7372 round_trippers.go:580]     Audit-Id: 61c09052-390e-4c8f-93cf-95cf8b4a4ee5
	I0415 18:04:24.384594    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:24.384594    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:24.384594    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:24.385531    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:24.385531    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:24 GMT
	I0415 18:04:24.386247    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:24.387015    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:24.387239    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:24.387239    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:24.387290    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:24.392719    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:24.392882    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:24.392882    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:24.392882    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:24.392882    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:24.392882    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:24 GMT
	I0415 18:04:24.392882    7372 round_trippers.go:580]     Audit-Id: 33d64884-2b83-4195-ae17-9a08ab7f8e03
	I0415 18:04:24.393038    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:24.393076    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:24.875407    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:24.875407    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:24.875407    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:24.875407    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:24.879649    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:24.879718    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:24.879718    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:24.879718    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:24.879718    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:24.879718    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:24.879718    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:24 GMT
	I0415 18:04:24.879718    7372 round_trippers.go:580]     Audit-Id: f7d2054b-f1d5-4273-abb9-f78deec083a3
	I0415 18:04:24.879810    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:24.881057    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:24.881117    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:24.881117    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:24.881117    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:24.883971    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:24.883971    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:24.883971    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:24 GMT
	I0415 18:04:24.883971    7372 round_trippers.go:580]     Audit-Id: e55ba926-a9bc-4873-aeb6-62f9901dd759
	I0415 18:04:24.883971    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:24.884559    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:24.884559    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:24.884559    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:24.884739    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:25.375001    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:25.375001    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:25.375078    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:25.375078    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:25.379627    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:25.379627    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:25.379627    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:25.379627    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:25.379627    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:25 GMT
	I0415 18:04:25.379627    7372 round_trippers.go:580]     Audit-Id: 9d8eca41-d37a-4283-b4c5-2e88ebdd1e80
	I0415 18:04:25.379627    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:25.379627    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:25.379627    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:25.380987    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:25.380987    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:25.380987    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:25.380987    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:25.385232    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:25.385232    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:25.385232    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:25 GMT
	I0415 18:04:25.385232    7372 round_trippers.go:580]     Audit-Id: 12fef0eb-abd3-4b83-a3f5-b89427a0a803
	I0415 18:04:25.385373    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:25.385373    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:25.385373    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:25.385437    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:25.385852    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:25.875187    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:25.875247    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:25.875247    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:25.875247    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:25.883379    7372 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 18:04:25.883379    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:25.883379    7372 round_trippers.go:580]     Audit-Id: ace38675-f61d-46cd-81e0-d34d17deca1c
	I0415 18:04:25.883379    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:25.883379    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:25.883379    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:25.883379    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:25.883379    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:25 GMT
	I0415 18:04:25.883379    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:25.884644    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:25.884644    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:25.884644    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:25.884644    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:25.889502    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:25.889502    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:25.889502    7372 round_trippers.go:580]     Audit-Id: 805d5fe1-ce7d-452f-b8d4-3eba32f841e2
	I0415 18:04:25.889502    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:25.889502    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:25.889502    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:25.889502    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:25.889502    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:25 GMT
	I0415 18:04:25.890427    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:25.890767    7372 pod_ready.go:102] pod "coredns-76f75df574-sd42f" in "kube-system" namespace has status "Ready":"False"
	I0415 18:04:26.378959    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:26.378959    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:26.378959    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:26.378959    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:26.383674    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:26.383674    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:26.383674    7372 round_trippers.go:580]     Audit-Id: 3d97e2ae-dc85-4211-bcdf-1a8c9cca6dec
	I0415 18:04:26.383674    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:26.383674    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:26.383674    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:26.383991    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:26.383991    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:26 GMT
	I0415 18:04:26.384060    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:26.384685    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:26.384685    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:26.384685    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:26.384685    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:26.388136    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:26.388443    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:26.388479    7372 round_trippers.go:580]     Audit-Id: 09a49fe5-3ac9-4510-8ade-0c4921d28ef0
	I0415 18:04:26.388479    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:26.388479    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:26.388479    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:26.388479    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:26.388479    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:26 GMT
	I0415 18:04:26.389554    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:26.879135    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:26.879248    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:26.879248    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:26.879248    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:26.883097    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:26.883097    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:26.883757    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:26.883757    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:26.883835    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:26.883876    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:26.883876    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:26 GMT
	I0415 18:04:26.883876    7372 round_trippers.go:580]     Audit-Id: bc50af78-d33d-494e-bc59-f9c98c654e8e
	I0415 18:04:26.884038    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:26.884958    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:26.884958    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:26.885050    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:26.885050    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:26.888818    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:26.888981    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:26.888981    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:26 GMT
	I0415 18:04:26.888981    7372 round_trippers.go:580]     Audit-Id: 23d389bb-23bf-4fd9-b4ed-2eb0a546bf74
	I0415 18:04:26.888981    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:26.889109    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:26.889194    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:26.889267    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:26.889381    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:27.382487    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:27.382542    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:27.382542    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:27.382542    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:27.387174    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:27.387174    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:27.387911    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:27.387911    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:27.387911    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:27.387911    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:27.387911    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:27 GMT
	I0415 18:04:27.387911    7372 round_trippers.go:580]     Audit-Id: fc712dfb-1736-41f9-9335-5f08cc67ab29
	I0415 18:04:27.388303    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:27.389188    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:27.389222    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:27.389244    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:27.389244    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:27.392597    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:27.392597    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:27.393296    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:27 GMT
	I0415 18:04:27.393296    7372 round_trippers.go:580]     Audit-Id: 679cf50e-af5b-4448-9afb-ead3644e9a72
	I0415 18:04:27.393296    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:27.393296    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:27.393296    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:27.393296    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:27.393922    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:27.887103    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:27.887103    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:27.887103    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:27.887103    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:27.891694    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:27.891694    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:27.891694    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:27.891694    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:27.891694    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:27.892539    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:27.892539    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:27 GMT
	I0415 18:04:27.892539    7372 round_trippers.go:580]     Audit-Id: 2f0ec7d3-a4b8-44c9-bbd3-a4c09805c5b5
	I0415 18:04:27.892609    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:27.893602    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:27.893602    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:27.893602    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:27.893602    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:27.898394    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:27.898515    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:27.898515    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:27.898515    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:27 GMT
	I0415 18:04:27.898515    7372 round_trippers.go:580]     Audit-Id: 14c681d0-bc1c-4a5f-84a8-e0547abc0f0d
	I0415 18:04:27.898515    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:27.898515    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:27.898515    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:27.898694    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:27.899461    7372 pod_ready.go:102] pod "coredns-76f75df574-sd42f" in "kube-system" namespace has status "Ready":"False"
	I0415 18:04:28.385407    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:28.385407    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:28.385407    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:28.385407    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:28.390879    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:28.390994    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:28.390994    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:28.390994    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:28.391035    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:28 GMT
	I0415 18:04:28.391035    7372 round_trippers.go:580]     Audit-Id: 271572a6-7fa5-4a42-8322-6d5b72e10c20
	I0415 18:04:28.391035    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:28.391076    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:28.391229    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"594","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6677 chars]
	I0415 18:04:28.392042    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:28.392042    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:28.392042    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:28.392042    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:28.396290    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:28.396290    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:28.396522    7372 round_trippers.go:580]     Audit-Id: 5da3d68f-15b9-4d31-ac7c-70bae00133cf
	I0415 18:04:28.396522    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:28.396522    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:28.396522    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:28.396522    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:28.396522    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:28 GMT
	I0415 18:04:28.396855    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:28.884339    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:28.884612    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:28.884612    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:28.884612    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:28.889515    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:28.889515    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:28.889515    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:28.889747    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:28.889747    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:28.889747    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:28 GMT
	I0415 18:04:28.889747    7372 round_trippers.go:580]     Audit-Id: 419b5e33-5fc2-4b44-97dd-ef195369df69
	I0415 18:04:28.889747    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:28.890135    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"598","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6448 chars]
	I0415 18:04:28.891312    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:28.891418    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:28.891418    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:28.891418    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:28.894623    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:28.894623    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:28.894623    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:28.894623    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:28.894623    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:28 GMT
	I0415 18:04:28.894623    7372 round_trippers.go:580]     Audit-Id: 58f275df-2c1e-4fbe-ae31-a051a8b9dd37
	I0415 18:04:28.894623    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:28.894623    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:28.895513    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:28.896130    7372 pod_ready.go:92] pod "coredns-76f75df574-sd42f" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:28.896130    7372 pod_ready.go:81] duration metric: took 5.021938s for pod "coredns-76f75df574-sd42f" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:28.896130    7372 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:28.896130    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:28.896130    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:28.896130    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:28.896130    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:28.900013    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:28.900013    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:28.900013    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:28.900013    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:28.900013    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:28.900013    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:28.900013    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:28 GMT
	I0415 18:04:28.900013    7372 round_trippers.go:580]     Audit-Id: 4ae5ac4e-370c-4582-a805-53ba24bc4a6b
	I0415 18:04:28.900013    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"581","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6586 chars]
	I0415 18:04:28.901412    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:28.901499    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:28.901499    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:28.901552    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:28.904152    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:28.904152    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:28.904152    7372 round_trippers.go:580]     Audit-Id: fb126c93-d75c-4d88-8523-b620bc6c5350
	I0415 18:04:28.904152    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:28.904152    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:28.904152    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:28.904152    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:28.904152    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:28 GMT
	I0415 18:04:28.904801    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:29.400779    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:29.400779    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:29.400779    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:29.400779    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:29.404438    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:29.404438    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:29.404438    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:29.405042    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:29 GMT
	I0415 18:04:29.405042    7372 round_trippers.go:580]     Audit-Id: 65c8e6e9-1200-4a0e-ab09-9c9d17fd9597
	I0415 18:04:29.405042    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:29.405042    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:29.405042    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:29.405217    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"581","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6586 chars]
	I0415 18:04:29.405491    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:29.405491    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:29.405491    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:29.405491    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:29.409228    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:29.409228    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:29.409228    7372 round_trippers.go:580]     Audit-Id: a0791805-ba82-459e-aa5b-a16c5fd3b3a2
	I0415 18:04:29.409228    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:29.409228    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:29.409701    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:29.409701    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:29.409701    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:29 GMT
	I0415 18:04:29.410040    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:29.901036    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:29.901036    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:29.901036    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:29.901036    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:29.905636    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:29.905636    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:29.905636    7372 round_trippers.go:580]     Audit-Id: 56d12a12-e22c-4a3b-9b1b-765c652369f1
	I0415 18:04:29.905636    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:29.905636    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:29.905636    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:29.905636    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:29.905636    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:29 GMT
	I0415 18:04:29.906030    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"581","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6586 chars]
	I0415 18:04:29.906895    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:29.906968    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:29.906968    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:29.906968    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:29.909989    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:29.910275    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:29.910275    7372 round_trippers.go:580]     Audit-Id: ea091ce0-25de-409c-9db2-31e469155177
	I0415 18:04:29.910275    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:29.910355    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:29.910355    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:29.910355    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:29.910355    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:29 GMT
	I0415 18:04:29.910411    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:30.401316    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:30.401316    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:30.401316    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:30.401316    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:30.405899    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:30.406071    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:30.406071    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:30.406071    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:30 GMT
	I0415 18:04:30.406071    7372 round_trippers.go:580]     Audit-Id: fb218d77-a471-4e1c-a149-d24b9cdc6fdb
	I0415 18:04:30.406071    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:30.406071    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:30.406141    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:30.406435    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"581","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6586 chars]
	I0415 18:04:30.407288    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:30.407346    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:30.407346    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:30.407346    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:30.411079    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:30.411079    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:30.411079    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:30.411642    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:30.411900    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:30.412087    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:30 GMT
	I0415 18:04:30.412224    7372 round_trippers.go:580]     Audit-Id: 619e3481-e29e-4d85-88b1-06e417860d42
	I0415 18:04:30.412265    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:30.412737    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:30.899601    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:30.899601    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:30.899601    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:30.899601    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:30.904187    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:30.904187    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:30.904187    7372 round_trippers.go:580]     Audit-Id: f58c14af-1a67-4868-a56f-bdf68ada4547
	I0415 18:04:30.904187    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:30.904187    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:30.904187    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:30.904187    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:30.904187    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:30 GMT
	I0415 18:04:30.904187    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"581","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6586 chars]
	I0415 18:04:30.905509    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:30.905581    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:30.905581    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:30.905581    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:30.909137    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:30.909137    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:30.909137    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:30.909137    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:30.909401    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:30 GMT
	I0415 18:04:30.909401    7372 round_trippers.go:580]     Audit-Id: b1f6e281-87ac-4f8a-b7c3-d9808e5bea0a
	I0415 18:04:30.909465    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:30.909504    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:30.909559    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:30.910297    7372 pod_ready.go:102] pod "etcd-functional-831100" in "kube-system" namespace has status "Ready":"False"
	I0415 18:04:31.401709    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:31.401777    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:31.401777    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:31.401777    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:31.405556    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:31.406634    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:31.406634    7372 round_trippers.go:580]     Audit-Id: 5e05181d-35d5-44cc-a03a-6e768f3ee963
	I0415 18:04:31.406634    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:31.406634    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:31.406634    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:31.406634    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:31.406634    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:31 GMT
	I0415 18:04:31.407153    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"581","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6586 chars]
	I0415 18:04:31.407351    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:31.407351    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:31.407351    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:31.407351    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:31.411142    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:31.411309    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:31.411309    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:31 GMT
	I0415 18:04:31.411309    7372 round_trippers.go:580]     Audit-Id: 651ed101-d3d7-44bb-9821-effe6c5e955b
	I0415 18:04:31.411309    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:31.411309    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:31.411309    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:31.411309    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:31.411598    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:31.898053    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:31.898053    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:31.898053    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:31.898053    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:31.901663    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:31.902427    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:31.902427    7372 round_trippers.go:580]     Audit-Id: 98695133-749b-4613-a76e-496f41f8fe06
	I0415 18:04:31.902427    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:31.902494    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:31.902494    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:31.902494    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:31.902494    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:31 GMT
	I0415 18:04:31.903081    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"601","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6362 chars]
	I0415 18:04:31.903973    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:31.904060    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:31.904060    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:31.904060    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:31.906502    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:31.907447    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:31.907486    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:31.907486    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:31.907486    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:31.907570    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:31.907570    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:31 GMT
	I0415 18:04:31.907570    7372 round_trippers.go:580]     Audit-Id: 26d6b566-91fb-422a-8e63-0134efc5402e
	I0415 18:04:31.907933    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:31.908290    7372 pod_ready.go:92] pod "etcd-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:31.908290    7372 pod_ready.go:81] duration metric: took 3.0121365s for pod "etcd-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:31.908290    7372 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:31.908290    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:31.908290    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:31.908290    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:31.908290    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:31.912119    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:31.912119    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:31.912119    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:31.912119    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:31.912119    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:31 GMT
	I0415 18:04:31.912119    7372 round_trippers.go:580]     Audit-Id: 94519e8a-132f-4026-96c2-abcac7d236f1
	I0415 18:04:31.912119    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:31.912119    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:31.913462    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"578","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I0415 18:04:31.914104    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:31.914104    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:31.914104    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:31.914214    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:31.916374    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:31.916374    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:31.917412    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:31.917412    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:31 GMT
	I0415 18:04:31.917412    7372 round_trippers.go:580]     Audit-Id: f2ab402f-2be7-4f22-b72f-60f5c49ef50c
	I0415 18:04:31.917412    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:31.917412    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:31.917412    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:31.917412    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:32.409862    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:32.409862    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:32.409862    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:32.409862    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:32.414176    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:32.414176    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:32.414176    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:32.414747    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:32 GMT
	I0415 18:04:32.414747    7372 round_trippers.go:580]     Audit-Id: 2abc0f0e-5f66-405e-b296-18336ab574ec
	I0415 18:04:32.414747    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:32.414747    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:32.414747    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:32.415072    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"578","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I0415 18:04:32.415538    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:32.415538    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:32.415538    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:32.415538    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:32.420937    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:32.420937    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:32.420937    7372 round_trippers.go:580]     Audit-Id: 743b3b86-da82-4698-8f0a-d3e49735d1b4
	I0415 18:04:32.420937    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:32.420937    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:32.420937    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:32.420937    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:32.420937    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:32 GMT
	I0415 18:04:32.421586    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:32.909923    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:32.909991    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:32.909991    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:32.909991    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:32.914878    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:32.915678    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:32.915734    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:32.915734    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:32.915734    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:32 GMT
	I0415 18:04:32.915734    7372 round_trippers.go:580]     Audit-Id: 039c05ad-89b4-4bec-bc63-2cde070df736
	I0415 18:04:32.915734    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:32.915734    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:32.916267    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"578","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I0415 18:04:32.917204    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:32.917296    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:32.917296    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:32.917296    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:32.920360    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:32.920899    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:32.920899    7372 round_trippers.go:580]     Audit-Id: 420febb0-fe52-4fab-8962-c074a2d444b0
	I0415 18:04:32.920973    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:32.920973    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:32.920973    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:32.920973    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:32.920973    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:32 GMT
	I0415 18:04:32.921142    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:33.411527    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:33.411527    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:33.411527    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:33.411527    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:33.415095    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:33.415095    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:33.415961    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:33.415961    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:33.415961    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:33.415961    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:33.415961    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:33 GMT
	I0415 18:04:33.416029    7372 round_trippers.go:580]     Audit-Id: 5ffa63c7-a918-499b-8f64-fb5d070a5c0a
	I0415 18:04:33.416319    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"578","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I0415 18:04:33.417687    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:33.417687    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:33.417755    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:33.417755    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:33.419891    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:33.420951    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:33.420971    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:33.420971    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:33 GMT
	I0415 18:04:33.420971    7372 round_trippers.go:580]     Audit-Id: 7b3df48b-930d-4c20-b0bd-b3938b19f811
	I0415 18:04:33.420971    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:33.420971    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:33.420971    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:33.421952    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:33.911430    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:33.911484    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:33.911484    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:33.911484    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:33.916055    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:33.916055    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:33.916055    7372 round_trippers.go:580]     Audit-Id: ad10b569-d0ad-4b2c-bfbe-55971756ceed
	I0415 18:04:33.916055    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:33.916055    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:33.916055    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:33.916055    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:33.916191    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:33 GMT
	I0415 18:04:33.916466    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"578","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I0415 18:04:33.917770    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:33.917851    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:33.917851    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:33.917851    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:33.920484    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:33.920484    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:33.920484    7372 round_trippers.go:580]     Audit-Id: 4e970c50-5503-484f-aa7c-cfc7685c28cf
	I0415 18:04:33.920484    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:33.920484    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:33.920484    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:33.920484    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:33.920484    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:33 GMT
	I0415 18:04:33.921720    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:33.922309    7372 pod_ready.go:102] pod "kube-apiserver-functional-831100" in "kube-system" namespace has status "Ready":"False"
	I0415 18:04:34.415647    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:34.415647    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:34.415647    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:34.415647    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:34.424653    7372 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0415 18:04:34.424801    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:34.424801    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:34.424801    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:34.424801    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:34.424801    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:34.424801    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:34 GMT
	I0415 18:04:34.424801    7372 round_trippers.go:580]     Audit-Id: e296fc06-8c1f-4701-96fa-0a3fd2fb9cbb
	I0415 18:04:34.425155    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"578","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I0415 18:04:34.425844    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:34.425844    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:34.425844    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:34.425844    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:34.428845    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:34.428845    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:34.428845    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:34.428845    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:34.428845    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:34 GMT
	I0415 18:04:34.428845    7372 round_trippers.go:580]     Audit-Id: 1c1a8af9-0a24-48ea-9107-c0de9d7050da
	I0415 18:04:34.428845    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:34.428845    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:34.428845    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:34.917389    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:34.917389    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:34.917570    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:34.917570    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:34.921856    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:34.921856    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:34.921856    7372 round_trippers.go:580]     Audit-Id: 83255113-7932-49b1-af73-a94a3966e329
	I0415 18:04:34.921856    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:34.921856    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:34.921856    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:34.921856    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:34.921856    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:34 GMT
	I0415 18:04:34.921856    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"578","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I0415 18:04:34.923257    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:34.923316    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:34.923316    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:34.923316    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:34.925543    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:34.925543    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:34.925543    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:34 GMT
	I0415 18:04:34.925543    7372 round_trippers.go:580]     Audit-Id: 902b4665-9c23-4c44-89f2-d7f87abb1fc3
	I0415 18:04:34.925543    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:34.925543    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:34.925543    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:34.925543    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:34.926736    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:35.419097    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:35.419097    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.419097    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.419097    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.422752    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:35.422752    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.422752    7372 round_trippers.go:580]     Audit-Id: 83117116-e388-4646-81bc-600413b3befb
	I0415 18:04:35.422752    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.423819    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.423819    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.423819    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.423819    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.424187    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"611","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7894 chars]
	I0415 18:04:35.425080    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:35.425080    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.425169    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.425169    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.428697    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:35.428697    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.429149    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.429149    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.429149    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.429149    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.429149    7372 round_trippers.go:580]     Audit-Id: 534037df-acd4-4503-b550-cd29330417f7
	I0415 18:04:35.429149    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.429461    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:35.430159    7372 pod_ready.go:92] pod "kube-apiserver-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:35.430313    7372 pod_ready.go:81] duration metric: took 3.5219951s for pod "kube-apiserver-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.430427    7372 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.430545    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-831100
	I0415 18:04:35.430621    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.430621    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.430621    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.433855    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:35.433855    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.433855    7372 round_trippers.go:580]     Audit-Id: 35052eb2-46ab-4e06-85ca-f042b485999d
	I0415 18:04:35.433855    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.433855    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.433855    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.433855    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.433855    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.433855    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-831100","namespace":"kube-system","uid":"67d99219-f151-4281-8f69-ed09b79937d3","resourceVersion":"604","creationTimestamp":"2024-04-15T18:01:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c605a9bf8fc1edf145eebd8bc787cc94","kubernetes.io/config.mirror":"c605a9bf8fc1edf145eebd8bc787cc94","kubernetes.io/config.seen":"2024-04-15T18:01:26.523536084Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7467 chars]
	I0415 18:04:35.434892    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:35.434925    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.434925    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.434925    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.438240    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:35.438268    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.438268    7372 round_trippers.go:580]     Audit-Id: 979e75d5-0248-411b-bc3b-b5d6d07a3587
	I0415 18:04:35.438268    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.438268    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.438268    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.438268    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.438268    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.438268    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:35.439026    7372 pod_ready.go:92] pod "kube-controller-manager-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:35.439026    7372 pod_ready.go:81] duration metric: took 8.5986ms for pod "kube-controller-manager-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.439026    7372 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sfdhl" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.439026    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-proxy-sfdhl
	I0415 18:04:35.439026    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.439026    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.439026    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.441766    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:35.441766    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.441766    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.441766    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.441766    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.441766    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.441766    7372 round_trippers.go:580]     Audit-Id: ff00c413-4c24-46cc-a596-fc576463d1a8
	I0415 18:04:35.441766    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.442760    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sfdhl","generateName":"kube-proxy-","namespace":"kube-system","uid":"e82d2eca-3bbb-407f-9639-db448fa365db","resourceVersion":"593","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9307e1f-2e55-4b94-944c-a7b5f8f454bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9307e1f-2e55-4b94-944c-a7b5f8f454bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6030 chars]
	I0415 18:04:35.442760    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:35.442760    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.442760    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.442760    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.446046    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:35.446273    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.446273    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.446273    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.446273    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.446273    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.446273    7372 round_trippers.go:580]     Audit-Id: 553207ef-0fb1-41b8-a229-8878d496be0b
	I0415 18:04:35.446273    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.446273    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:35.446961    7372 pod_ready.go:92] pod "kube-proxy-sfdhl" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:35.447030    7372 pod_ready.go:81] duration metric: took 8.0045ms for pod "kube-proxy-sfdhl" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.447030    7372 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.447178    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-831100
	I0415 18:04:35.447178    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.447178    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.447178    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.449910    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:35.449910    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.449910    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.449910    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.449910    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.449910    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.449910    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.449910    7372 round_trippers.go:580]     Audit-Id: 3b956623-bca2-4f15-9aa2-e8c227e53b91
	I0415 18:04:35.449910    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-831100","namespace":"kube-system","uid":"fc7f4de2-5606-4f85-b9d6-8947a4e27303","resourceVersion":"603","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"913870b73f126e9f9c788c6f62aa0059","kubernetes.io/config.mirror":"913870b73f126e9f9c788c6f62aa0059","kubernetes.io/config.seen":"2024-04-15T18:01:35.845961198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0415 18:04:35.450839    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:35.450839    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.450839    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.450839    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.457039    7372 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 18:04:35.457039    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.457039    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.457039    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.457039    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.457039    7372 round_trippers.go:580]     Audit-Id: d008c0bf-24d4-4df4-a0e3-5232e2ad4956
	I0415 18:04:35.457039    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.457039    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.457039    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:35.457671    7372 pod_ready.go:92] pod "kube-scheduler-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:35.457671    7372 pod_ready.go:81] duration metric: took 10.6405ms for pod "kube-scheduler-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.457671    7372 pod_ready.go:38] duration metric: took 11.5941732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 18:04:35.457671    7372 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:04:35.481280    7372 command_runner.go:130] > -16
	I0415 18:04:35.481390    7372 ops.go:34] apiserver oom_adj: -16
	I0415 18:04:35.481390    7372 kubeadm.go:591] duration metric: took 22.501955s to restartPrimaryControlPlane
	I0415 18:04:35.481390    7372 kubeadm.go:393] duration metric: took 22.5908265s to StartCluster
	I0415 18:04:35.481482    7372 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:04:35.481847    7372 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:04:35.483092    7372 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:04:35.484630    7372 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.62.76 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:04:35.488091    7372 out.go:177] * Verifying Kubernetes components...
	I0415 18:04:35.484630    7372 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:04:35.488091    7372 addons.go:69] Setting storage-provisioner=true in profile "functional-831100"
	I0415 18:04:35.491784    7372 addons.go:234] Setting addon storage-provisioner=true in "functional-831100"
	I0415 18:04:35.484630    7372 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:04:35.488091    7372 addons.go:69] Setting default-storageclass=true in profile "functional-831100"
	W0415 18:04:35.491867    7372 addons.go:243] addon storage-provisioner should already be in state true
	I0415 18:04:35.492000    7372 host.go:66] Checking if "functional-831100" exists ...
	I0415 18:04:35.491867    7372 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-831100"
	I0415 18:04:35.492751    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:04:35.493490    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:04:35.512755    7372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:04:35.845252    7372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:04:35.875512    7372 node_ready.go:35] waiting up to 6m0s for node "functional-831100" to be "Ready" ...
	I0415 18:04:35.875512    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:35.875512    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.875512    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.875512    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.881205    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:35.881205    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.881205    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.881205    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.881205    7372 round_trippers.go:580]     Audit-Id: deaf85b4-4ed3-4637-bf82-1cc13aa382da
	I0415 18:04:35.881205    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.881205    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.881205    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.881687    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:35.882453    7372 node_ready.go:49] node "functional-831100" has status "Ready":"True"
	I0415 18:04:35.882453    7372 node_ready.go:38] duration metric: took 6.9409ms for node "functional-831100" to be "Ready" ...
	I0415 18:04:35.882453    7372 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 18:04:35.882453    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods
	I0415 18:04:35.882453    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.882453    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.882453    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.887142    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:35.888206    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.888250    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.888323    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.888323    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.888323    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.888323    7372 round_trippers.go:580]     Audit-Id: cdbbe19f-5023-4dfd-8329-a53fc2185e5d
	I0415 18:04:35.888323    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.890680    7372 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"611"},"items":[{"metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"598","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50055 chars]
	I0415 18:04:35.894968    7372 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sd42f" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.894968    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sd42f
	I0415 18:04:35.894968    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.894968    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.894968    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.901633    7372 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 18:04:35.902506    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.902506    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.902506    7372 round_trippers.go:580]     Audit-Id: 4f7e3986-0fb1-4157-ac86-af04ff470cf9
	I0415 18:04:35.902506    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.902506    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.902506    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.902506    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.902833    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"598","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6448 chars]
	I0415 18:04:35.903546    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:35.903608    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:35.903608    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:35.903608    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:35.906931    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:35.907760    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:35.907760    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:35.907760    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:35.907760    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:35.907760    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:35.907760    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:35 GMT
	I0415 18:04:35.907760    7372 round_trippers.go:580]     Audit-Id: 0137ac9b-bff7-4c77-aee9-4eb3ddb89472
	I0415 18:04:35.908164    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:35.908900    7372 pod_ready.go:92] pod "coredns-76f75df574-sd42f" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:35.908900    7372 pod_ready.go:81] duration metric: took 13.9321ms for pod "coredns-76f75df574-sd42f" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:35.908998    7372 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:36.033008    7372 request.go:629] Waited for 124.0094ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:36.033008    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/etcd-functional-831100
	I0415 18:04:36.033008    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:36.033008    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:36.033008    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:36.038389    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:36.038389    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:36.038486    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:36.038527    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:36.038527    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:36.038527    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:36.038527    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:36 GMT
	I0415 18:04:36.038527    7372 round_trippers.go:580]     Audit-Id: f295621c-6785-4032-a882-82ae90dae318
	I0415 18:04:36.038975    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-831100","namespace":"kube-system","uid":"0151e2e9-8814-43eb-91a8-33221f5e6293","resourceVersion":"601","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.76:2379","kubernetes.io/config.hash":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.mirror":"86fbc658d61c8213444bfd2b96916082","kubernetes.io/config.seen":"2024-04-15T18:01:35.845954198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6362 chars]
	I0415 18:04:36.222666    7372 request.go:629] Waited for 182.3591ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:36.223223    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:36.223223    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:36.223223    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:36.223223    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:36.226907    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:36.227722    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:36.228874    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:36.228966    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:36 GMT
	I0415 18:04:36.228966    7372 round_trippers.go:580]     Audit-Id: 5623ede4-50bc-4aaf-a6a8-2ae3a8aec487
	I0415 18:04:36.228966    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:36.228966    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:36.228966    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:36.228966    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:36.229678    7372 pod_ready.go:92] pod "etcd-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:36.229678    7372 pod_ready.go:81] duration metric: took 320.6775ms for pod "etcd-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:36.229678    7372 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:36.428880    7372 request.go:629] Waited for 199.201ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:36.429099    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-831100
	I0415 18:04:36.429099    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:36.429099    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:36.429099    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:36.434083    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:36.434151    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:36.434151    7372 round_trippers.go:580]     Audit-Id: 3e13d818-5af9-4eaf-a9d8-69055b398c24
	I0415 18:04:36.434151    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:36.434151    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:36.434151    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:36.434151    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:36.434151    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:36 GMT
	I0415 18:04:36.434151    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-831100","namespace":"kube-system","uid":"3917e8a9-aeeb-4bec-9d3e-01855f643c6b","resourceVersion":"611","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.76:8441","kubernetes.io/config.hash":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.mirror":"7edcd45f098301eedfa7f486a9ad4987","kubernetes.io/config.seen":"2024-04-15T18:01:35.845959098Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7894 chars]
	I0415 18:04:36.620480    7372 request.go:629] Waited for 185.5913ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:36.620814    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:36.620814    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:36.620814    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:36.620814    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:36.624401    7372 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:04:36.624401    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:36.625340    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:36 GMT
	I0415 18:04:36.625340    7372 round_trippers.go:580]     Audit-Id: 20d6cff8-289f-487c-9926-f9c68c8999fb
	I0415 18:04:36.625340    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:36.625340    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:36.625340    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:36.625387    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:36.625542    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:36.626158    7372 pod_ready.go:92] pod "kube-apiserver-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:36.626158    7372 pod_ready.go:81] duration metric: took 396.4775ms for pod "kube-apiserver-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:36.626158    7372 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:36.826955    7372 request.go:629] Waited for 200.6614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-831100
	I0415 18:04:36.827063    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-831100
	I0415 18:04:36.827063    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:36.827063    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:36.827063    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:36.831582    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:36.832163    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:36.832163    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:36.832163    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:36.832163    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:36.832163    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:36 GMT
	I0415 18:04:36.832163    7372 round_trippers.go:580]     Audit-Id: db5d10a6-a0e2-4e1d-9781-48575def8dd0
	I0415 18:04:36.832163    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:36.832477    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-831100","namespace":"kube-system","uid":"67d99219-f151-4281-8f69-ed09b79937d3","resourceVersion":"604","creationTimestamp":"2024-04-15T18:01:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c605a9bf8fc1edf145eebd8bc787cc94","kubernetes.io/config.mirror":"c605a9bf8fc1edf145eebd8bc787cc94","kubernetes.io/config.seen":"2024-04-15T18:01:26.523536084Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7467 chars]
	I0415 18:04:37.031763    7372 request.go:629] Waited for 198.5247ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:37.031763    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:37.031763    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:37.032188    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:37.032231    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:37.038634    7372 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 18:04:37.038634    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:37.038634    7372 round_trippers.go:580]     Audit-Id: 76c14110-2008-4ae2-b813-5e3e2c511997
	I0415 18:04:37.038634    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:37.038634    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:37.038634    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:37.038634    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:37.038634    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:37 GMT
	I0415 18:04:37.040421    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:37.041901    7372 pod_ready.go:92] pod "kube-controller-manager-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:37.042134    7372 pod_ready.go:81] duration metric: took 415.781ms for pod "kube-controller-manager-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:37.042134    7372 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sfdhl" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:37.222051    7372 request.go:629] Waited for 179.7407ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-proxy-sfdhl
	I0415 18:04:37.222354    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-proxy-sfdhl
	I0415 18:04:37.222354    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:37.222354    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:37.222354    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:37.227960    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:37.227960    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:37.227960    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:37.227960    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:37 GMT
	I0415 18:04:37.227960    7372 round_trippers.go:580]     Audit-Id: 8c68aebd-7b97-45f1-a220-8fb8d488939e
	I0415 18:04:37.227960    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:37.227960    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:37.227960    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:37.228133    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sfdhl","generateName":"kube-proxy-","namespace":"kube-system","uid":"e82d2eca-3bbb-407f-9639-db448fa365db","resourceVersion":"593","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9307e1f-2e55-4b94-944c-a7b5f8f454bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9307e1f-2e55-4b94-944c-a7b5f8f454bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6030 chars]
	I0415 18:04:37.428639    7372 request.go:629] Waited for 199.6152ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:37.428880    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:37.428880    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:37.428880    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:37.428880    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:37.433557    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:37.433646    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:37.433646    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:37.433646    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:37 GMT
	I0415 18:04:37.433646    7372 round_trippers.go:580]     Audit-Id: 4fd061bb-619f-4baf-9bab-cbc8db13136d
	I0415 18:04:37.433646    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:37.433646    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:37.433646    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:37.433893    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:37.433893    7372 pod_ready.go:92] pod "kube-proxy-sfdhl" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:37.433893    7372 pod_ready.go:81] duration metric: took 391.7564ms for pod "kube-proxy-sfdhl" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:37.434429    7372 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:37.620744    7372 request.go:629] Waited for 186.072ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-831100
	I0415 18:04:37.620848    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-831100
	I0415 18:04:37.620848    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:37.620848    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:37.620848    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:37.625507    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:37.626003    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:37.626003    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:37 GMT
	I0415 18:04:37.626003    7372 round_trippers.go:580]     Audit-Id: 7d2ced12-7a24-4774-80b4-e7d0ba730d85
	I0415 18:04:37.626003    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:37.626003    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:37.626003    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:37.626003    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:37.626239    7372 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-831100","namespace":"kube-system","uid":"fc7f4de2-5606-4f85-b9d6-8947a4e27303","resourceVersion":"603","creationTimestamp":"2024-04-15T18:01:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"913870b73f126e9f9c788c6f62aa0059","kubernetes.io/config.mirror":"913870b73f126e9f9c788c6f62aa0059","kubernetes.io/config.seen":"2024-04-15T18:01:35.845961198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0415 18:04:37.824427    7372 request.go:629] Waited for 197.4419ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:37.824427    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes/functional-831100
	I0415 18:04:37.824427    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:37.824427    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:37.824427    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:37.829754    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:37.830488    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:37.830834    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:37.831192    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:37.831192    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:37.831192    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:37 GMT
	I0415 18:04:37.831192    7372 round_trippers.go:580]     Audit-Id: 946e17eb-5330-48f6-a00e-978403caffe7
	I0415 18:04:37.831192    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:37.831192    7372 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T18:01:31Z","fieldsType":"Fie [truncated 4793 chars]
	I0415 18:04:37.831816    7372 pod_ready.go:92] pod "kube-scheduler-functional-831100" in "kube-system" namespace has status "Ready":"True"
	I0415 18:04:37.831816    7372 pod_ready.go:81] duration metric: took 397.3837ms for pod "kube-scheduler-functional-831100" in "kube-system" namespace to be "Ready" ...
	I0415 18:04:37.831816    7372 pod_ready.go:38] duration metric: took 1.9493474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
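The `pod_ready.go:92` lines above conclude a readiness poll: each GET on `/api/v1/namespaces/kube-system/pods/<name>` returns a Pod body whose `status.conditions` list is scanned for a `Ready` condition with status `True`. A stripped-down sketch of that check against a trimmed Pod body (the `podStatus`/`isReady` names are illustrative, not minikube's actual symbols):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podStatus models only the fields the readiness check needs
// from the Pod response bodies shown in the log.
type podStatus struct {
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the pod's Ready condition is True.
func isReady(body []byte) (string, bool, error) {
	var p podStatus
	if err := json.Unmarshal(body, &p); err != nil {
		return "", false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return p.Metadata.Name, c.Status == "True", nil
		}
	}
	return p.Metadata.Name, false, nil
}

func main() {
	body := []byte(`{"metadata":{"name":"kube-proxy-sfdhl"},
		"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	name, ready, err := isReady(body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", name, ready)
	// → pod "kube-proxy-sfdhl" Ready=true
}
```

When the condition is not yet `True`, the loop sleeps and re-polls until the per-pod 6m0s deadline, which is what produces the alternating pod/node GET pairs in this section.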
	I0415 18:04:37.831816    7372 api_server.go:52] waiting for apiserver process to appear ...
	I0415 18:04:37.850518    7372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:04:37.886619    7372 command_runner.go:130] > 5237
	I0415 18:04:37.886619    7372 api_server.go:72] duration metric: took 2.4019704s to wait for apiserver process to appear ...
	I0415 18:04:37.886619    7372 api_server.go:88] waiting for apiserver healthz status ...
	I0415 18:04:37.886619    7372 api_server.go:253] Checking apiserver healthz at https://172.19.62.76:8441/healthz ...
	I0415 18:04:37.894053    7372 api_server.go:279] https://172.19.62.76:8441/healthz returned 200:
	ok
	I0415 18:04:37.894614    7372 round_trippers.go:463] GET https://172.19.62.76:8441/version
	I0415 18:04:37.894614    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:37.894614    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:37.894614    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:37.897004    7372 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 18:04:37.897004    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:37.897004    7372 round_trippers.go:580]     Audit-Id: 6b150e06-4679-4946-bee0-a021bbcdb954
	I0415 18:04:37.897004    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:37.897004    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:37.897004    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:37.897004    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:37.897004    7372 round_trippers.go:580]     Content-Length: 263
	I0415 18:04:37.897004    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:37 GMT
	I0415 18:04:37.897004    7372 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0415 18:04:37.897004    7372 api_server.go:141] control plane version: v1.29.3
	I0415 18:04:37.897004    7372 api_server.go:131] duration metric: took 10.3845ms to wait for apiserver health ...
	I0415 18:04:37.897004    7372 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 18:04:37.899251    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:04:37.899251    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:04:37.899251    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:04:37.899323    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:04:37.903118    7372 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:04:37.900142    7372 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:04:37.903361    7372 kapi.go:59] client config for functional-831100: &rest.Config{Host:"https://172.19.62.76:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-831100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-831100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:04:37.905744    7372 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:04:37.905744    7372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:04:37.905744    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:04:37.906448    7372 addons.go:234] Setting addon default-storageclass=true in "functional-831100"
	W0415 18:04:37.906448    7372 addons.go:243] addon default-storageclass should already be in state true
	I0415 18:04:37.906511    7372 host.go:66] Checking if "functional-831100" exists ...
	I0415 18:04:37.907303    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:04:38.027403    7372 request.go:629] Waited for 130.2264ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods
	I0415 18:04:38.027403    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods
	I0415 18:04:38.027403    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:38.027403    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:38.027403    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:38.035692    7372 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 18:04:38.035760    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:38.035760    7372 round_trippers.go:580]     Audit-Id: 07d45c45-d418-48a5-ad6f-411a2f06cbc8
	I0415 18:04:38.035760    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:38.035760    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:38.035760    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:38.035760    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:38.035971    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:38 GMT
	I0415 18:04:38.037471    7372 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"611"},"items":[{"metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"598","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50055 chars]
	I0415 18:04:38.040940    7372 system_pods.go:59] 7 kube-system pods found
	I0415 18:04:38.040940    7372 system_pods.go:61] "coredns-76f75df574-sd42f" [a05305e5-a9c7-4bee-9329-bc4608f0f7b8] Running
	I0415 18:04:38.040940    7372 system_pods.go:61] "etcd-functional-831100" [0151e2e9-8814-43eb-91a8-33221f5e6293] Running
	I0415 18:04:38.040940    7372 system_pods.go:61] "kube-apiserver-functional-831100" [3917e8a9-aeeb-4bec-9d3e-01855f643c6b] Running
	I0415 18:04:38.040940    7372 system_pods.go:61] "kube-controller-manager-functional-831100" [67d99219-f151-4281-8f69-ed09b79937d3] Running
	I0415 18:04:38.040940    7372 system_pods.go:61] "kube-proxy-sfdhl" [e82d2eca-3bbb-407f-9639-db448fa365db] Running
	I0415 18:04:38.040940    7372 system_pods.go:61] "kube-scheduler-functional-831100" [fc7f4de2-5606-4f85-b9d6-8947a4e27303] Running
	I0415 18:04:38.040940    7372 system_pods.go:61] "storage-provisioner" [9494c9a1-8863-43cd-91b2-67524861807c] Running
	I0415 18:04:38.040940    7372 system_pods.go:74] duration metric: took 143.9354ms to wait for pod list to return data ...
	I0415 18:04:38.040940    7372 default_sa.go:34] waiting for default service account to be created ...
	I0415 18:04:38.232856    7372 request.go:629] Waited for 190.9931ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/default/serviceaccounts
	I0415 18:04:38.232945    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/default/serviceaccounts
	I0415 18:04:38.233037    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:38.233037    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:38.233037    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:38.238957    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:38.239327    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:38.239452    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:38.239452    7372 round_trippers.go:580]     Content-Length: 261
	I0415 18:04:38.239746    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:38 GMT
	I0415 18:04:38.239911    7372 round_trippers.go:580]     Audit-Id: 7e9c2e6d-6fe9-469c-b0a6-88e845b7fa72
	I0415 18:04:38.239911    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:38.239911    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:38.239911    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:38.239911    7372 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"611"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1597ef92-b37b-4f12-9d61-947d8fa6b622","resourceVersion":"342","creationTimestamp":"2024-04-15T18:01:49Z"}}]}
	I0415 18:04:38.239911    7372 default_sa.go:45] found service account: "default"
	I0415 18:04:38.239911    7372 default_sa.go:55] duration metric: took 198.9693ms for default service account to be created ...
	I0415 18:04:38.239911    7372 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 18:04:38.422250    7372 request.go:629] Waited for 182.1269ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods
	I0415 18:04:38.422250    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/namespaces/kube-system/pods
	I0415 18:04:38.422250    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:38.422250    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:38.422250    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:38.427526    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:38.428477    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:38.428477    7372 round_trippers.go:580]     Audit-Id: 23321a9e-a5c0-4f71-a2db-d6c43f588999
	I0415 18:04:38.428544    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:38.428544    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:38.428613    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:38.428639    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:38.428690    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:38 GMT
	I0415 18:04:38.429744    7372 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"611"},"items":[{"metadata":{"name":"coredns-76f75df574-sd42f","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"a05305e5-a9c7-4bee-9329-bc4608f0f7b8","resourceVersion":"598","creationTimestamp":"2024-04-15T18:01:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"4805f3d9-11d6-4b14-98bc-8dddffc85ef5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T18:01:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805f3d9-11d6-4b14-98bc-8dddffc85ef5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50055 chars]
	I0415 18:04:38.433175    7372 system_pods.go:86] 7 kube-system pods found
	I0415 18:04:38.433175    7372 system_pods.go:89] "coredns-76f75df574-sd42f" [a05305e5-a9c7-4bee-9329-bc4608f0f7b8] Running
	I0415 18:04:38.433175    7372 system_pods.go:89] "etcd-functional-831100" [0151e2e9-8814-43eb-91a8-33221f5e6293] Running
	I0415 18:04:38.433734    7372 system_pods.go:89] "kube-apiserver-functional-831100" [3917e8a9-aeeb-4bec-9d3e-01855f643c6b] Running
	I0415 18:04:38.433734    7372 system_pods.go:89] "kube-controller-manager-functional-831100" [67d99219-f151-4281-8f69-ed09b79937d3] Running
	I0415 18:04:38.433734    7372 system_pods.go:89] "kube-proxy-sfdhl" [e82d2eca-3bbb-407f-9639-db448fa365db] Running
	I0415 18:04:38.433734    7372 system_pods.go:89] "kube-scheduler-functional-831100" [fc7f4de2-5606-4f85-b9d6-8947a4e27303] Running
	I0415 18:04:38.433734    7372 system_pods.go:89] "storage-provisioner" [9494c9a1-8863-43cd-91b2-67524861807c] Running
	I0415 18:04:38.433734    7372 system_pods.go:126] duration metric: took 193.8213ms to wait for k8s-apps to be running ...
	I0415 18:04:38.433734    7372 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 18:04:38.448337    7372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:04:38.480890    7372 system_svc.go:56] duration metric: took 47.1562ms WaitForService to wait for kubelet
	I0415 18:04:38.480990    7372 kubeadm.go:576] duration metric: took 2.9963372s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:04:38.480990    7372 node_conditions.go:102] verifying NodePressure condition ...
	I0415 18:04:38.628138    7372 request.go:629] Waited for 146.4398ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.76:8441/api/v1/nodes
	I0415 18:04:38.628322    7372 round_trippers.go:463] GET https://172.19.62.76:8441/api/v1/nodes
	I0415 18:04:38.628322    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:38.628406    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:38.628406    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:38.633223    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:38.633223    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:38.633223    7372 round_trippers.go:580]     Audit-Id: 653e108d-c745-47ec-bc65-354b4dd48d02
	I0415 18:04:38.633223    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:38.633223    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:38.633223    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:38.633223    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:38.633223    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:38 GMT
	I0415 18:04:38.634047    7372 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"611"},"items":[{"metadata":{"name":"functional-831100","uid":"90bd7d42-8641-4fed-8284-9d91fb1eb862","resourceVersion":"541","creationTimestamp":"2024-04-15T18:01:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-831100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-831100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T18_01_36_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"m
anagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":" [truncated 4846 chars]
	I0415 18:04:38.634742    7372 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 18:04:38.634742    7372 node_conditions.go:123] node cpu capacity is 2
	I0415 18:04:38.634742    7372 node_conditions.go:105] duration metric: took 153.7504ms to run NodePressure ...
	I0415 18:04:38.634742    7372 start.go:240] waiting for startup goroutines ...
	I0415 18:04:40.297836    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:04:40.297836    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:04:40.298803    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:04:40.308943    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:04:40.309024    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:04:40.309158    7372 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:04:40.309224    7372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:04:40.309224    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
	I0415 18:04:42.672889    7372 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:04:42.672889    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:04:42.672889    7372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:04:43.136762    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:04:43.137270    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:04:43.138179    7372 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
	I0415 18:04:43.293486    7372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:04:44.189999    7372 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0415 18:04:44.189999    7372 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0415 18:04:44.189999    7372 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0415 18:04:44.189999    7372 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0415 18:04:44.189999    7372 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0415 18:04:44.189999    7372 command_runner.go:130] > pod/storage-provisioner configured
	I0415 18:04:45.404373    7372 main.go:141] libmachine: [stdout =====>] : 172.19.62.76
	
	I0415 18:04:45.404373    7372 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:04:45.405176    7372 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
	I0415 18:04:45.550127    7372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:04:45.724953    7372 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0415 18:04:45.725924    7372 round_trippers.go:463] GET https://172.19.62.76:8441/apis/storage.k8s.io/v1/storageclasses
	I0415 18:04:45.725924    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:45.725924    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:45.725924    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:45.729945    7372 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 18:04:45.730325    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:45.730325    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:45.730325    7372 round_trippers.go:580]     Content-Length: 1273
	I0415 18:04:45.730325    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:45 GMT
	I0415 18:04:45.730325    7372 round_trippers.go:580]     Audit-Id: c28f2d6c-0b19-44cb-9769-f52cdcb978aa
	I0415 18:04:45.730325    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:45.730325    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:45.730325    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:45.730325    7372 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"618"},"items":[{"metadata":{"name":"standard","uid":"ea76f982-ccc6-4130-9a46-d10c628c8df0","resourceVersion":"428","creationTimestamp":"2024-04-15T18:02:00Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T18:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0415 18:04:45.731209    7372 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ea76f982-ccc6-4130-9a46-d10c628c8df0","resourceVersion":"428","creationTimestamp":"2024-04-15T18:02:00Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T18:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 18:04:45.731345    7372 round_trippers.go:463] PUT https://172.19.62.76:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:04:45.731345    7372 round_trippers.go:469] Request Headers:
	I0415 18:04:45.731400    7372 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:04:45.731400    7372 round_trippers.go:473]     Content-Type: application/json
	I0415 18:04:45.731400    7372 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:04:45.736703    7372 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 18:04:45.736703    7372 round_trippers.go:577] Response Headers:
	I0415 18:04:45.736703    7372 round_trippers.go:580]     Content-Type: application/json
	I0415 18:04:45.736703    7372 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b8233ddd-1e3d-4e63-9e62-0c2fb411ac63
	I0415 18:04:45.736703    7372 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2210b4e4-d680-49ad-ae5d-c41b910100cd
	I0415 18:04:45.736703    7372 round_trippers.go:580]     Content-Length: 1220
	I0415 18:04:45.736703    7372 round_trippers.go:580]     Date: Mon, 15 Apr 2024 18:04:45 GMT
	I0415 18:04:45.736703    7372 round_trippers.go:580]     Audit-Id: 70dc360d-e331-4edb-ac4d-2825fe96696e
	I0415 18:04:45.736703    7372 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 18:04:45.736703    7372 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ea76f982-ccc6-4130-9a46-d10c628c8df0","resourceVersion":"428","creationTimestamp":"2024-04-15T18:02:00Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T18:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 18:04:45.741149    7372 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:04:45.743059    7372 addons.go:505] duration metric: took 10.258348s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:04:45.743059    7372 start.go:245] waiting for cluster config update ...
	I0415 18:04:45.743059    7372 start.go:254] writing updated cluster config ...
	I0415 18:04:45.757104    7372 ssh_runner.go:195] Run: rm -f paused
	I0415 18:04:45.913215    7372 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 18:04:45.916252    7372 out.go:177] * Done! kubectl is now configured to use "functional-831100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.971692665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.971864663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.971888463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.972345959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.987854128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.988620621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.988677221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.988712520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.988827319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.990148108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.990181008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:16 functional-831100 dockerd[4269]: time="2024-04-15T18:04:16.990297907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:21 functional-831100 cri-dockerd[4491]: time="2024-04-15T18:04:21Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.760208878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.760402176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.761369669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.761678267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.822500123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.822635822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.822955220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.823250617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.854719688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.859451053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.859595852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:04:22 functional-831100 dockerd[4269]: time="2024-04-15T18:04:22.860104248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4bf55010d46f4       a1d263b5dc5b0       2 minutes ago       Running             kube-proxy                1                   ede1ba859aae4       kube-proxy-sfdhl
	611ab1c8d5640       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   203535709fe3b       coredns-76f75df574-sd42f
	0bb076dbb61b8       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   a3ba6b2c24a77       storage-provisioner
	4a80b94330d44       3861cfcd7c04c       2 minutes ago       Running             etcd                      1                   b2448eee77b5d       etcd-functional-831100
	7e699c01da091       8c390d98f50c0       2 minutes ago       Running             kube-scheduler            1                   78098e4c7d9e9       kube-scheduler-functional-831100
	0dc4db42442b3       39f995c9f1996       2 minutes ago       Running             kube-apiserver            1                   710b4528a9c66       kube-apiserver-functional-831100
	f95aea2087ba7       6052a25da3f97       2 minutes ago       Running             kube-controller-manager   1                   b520a86228ada       kube-controller-manager-functional-831100
	75a1acb33c128       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   7ad3153f9e9db       storage-provisioner
	da75672ff19a7       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   fec28243b30de       coredns-76f75df574-sd42f
	f28bec73517a3       a1d263b5dc5b0       4 minutes ago       Exited              kube-proxy                0                   6bc4a2c98c178       kube-proxy-sfdhl
	438e7aa22ff16       8c390d98f50c0       5 minutes ago       Exited              kube-scheduler            0                   fae332a0ecc29       kube-scheduler-functional-831100
	c902f023614f3       6052a25da3f97       5 minutes ago       Exited              kube-controller-manager   0                   9d2c2ef3c4269       kube-controller-manager-functional-831100
	698fa3050fb38       39f995c9f1996       5 minutes ago       Exited              kube-apiserver            0                   8cf9693690cd6       kube-apiserver-functional-831100
	765386ae687c2       3861cfcd7c04c       5 minutes ago       Exited              etcd                      0                   1c40010a4a72a       etcd-functional-831100
	
	
	==> coredns [611ab1c8d564] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43085 - 24985 "HINFO IN 8867340674663374237.6267907927489852796. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069602105s
	
	
	==> coredns [da75672ff19a] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2119759742]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:01:51.393) (total time: 30001ms):
	Trace[2119759742]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:02:21.394)
	Trace[2119759742]: [30.001515777s] [30.001515777s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[216589190]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:01:51.394) (total time: 30001ms):
	Trace[216589190]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:02:21.394)
	Trace[216589190]: [30.001448873s] [30.001448873s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1293992704]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:01:51.394) (total time: 30001ms):
	Trace[1293992704]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:02:21.395)
	Trace[1293992704]: [30.001116567s] [30.001116567s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36203 - 3395 "HINFO IN 7589039472594549754.5687282001112301883. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038380368s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-831100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-831100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=functional-831100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_01_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:01:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-831100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:06:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:05:52 +0000   Mon, 15 Apr 2024 18:01:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:05:52 +0000   Mon, 15 Apr 2024 18:01:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:05:52 +0000   Mon, 15 Apr 2024 18:01:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:05:52 +0000   Mon, 15 Apr 2024 18:01:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.62.76
	  Hostname:    functional-831100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 937311e2b303483183403bfffcca89cc
	  System UUID:                dd87471f-4b8e-c849-a609-9b52f3761595
	  Boot ID:                    02992dcd-8881-4a11-8360-4fd634c0333e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-sd42f                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m51s
	  kube-system                 etcd-functional-831100                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m4s
	  kube-system                 kube-apiserver-functional-831100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-functional-831100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-sfdhl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-functional-831100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m49s                  kube-proxy       
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node functional-831100 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node functional-831100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node functional-831100 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m5s                   kubelet          Node functional-831100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s                   kubelet          Node functional-831100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s                   kubelet          Node functional-831100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m3s                   kubelet          Node functional-831100 status is now: NodeReady
	  Normal  RegisteredNode           4m52s                  node-controller  Node functional-831100 event: Registered Node functional-831100 in Controller
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node functional-831100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node functional-831100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m24s)  kubelet          Node functional-831100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m6s                   node-controller  Node functional-831100 event: Registered Node functional-831100 in Controller
	
	
	==> dmesg <==
	[  +5.485750] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.759363] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +8.342457] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.114637] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.391139] systemd-fstab-generator[2135]: Ignoring "noauto" option for root device
	[  +0.141085] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.081053] systemd-fstab-generator[2378]: Ignoring "noauto" option for root device
	[  +0.237870] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.698635] kauditd_printk_skb: 88 callbacks suppressed
	[Apr15 18:02] kauditd_printk_skb: 10 callbacks suppressed
	[Apr15 18:03] systemd-fstab-generator[3778]: Ignoring "noauto" option for root device
	[  +0.745615] systemd-fstab-generator[3813]: Ignoring "noauto" option for root device
	[  +0.282816] systemd-fstab-generator[3825]: Ignoring "noauto" option for root device
	[  +0.351470] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[Apr15 18:04] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.099179] systemd-fstab-generator[4440]: Ignoring "noauto" option for root device
	[  +0.234496] systemd-fstab-generator[4452]: Ignoring "noauto" option for root device
	[  +0.262461] systemd-fstab-generator[4464]: Ignoring "noauto" option for root device
	[  +0.311600] systemd-fstab-generator[4480]: Ignoring "noauto" option for root device
	[  +0.994263] systemd-fstab-generator[4636]: Ignoring "noauto" option for root device
	[  +1.554998] kauditd_printk_skb: 140 callbacks suppressed
	[  +2.690350] systemd-fstab-generator[5109]: Ignoring "noauto" option for root device
	[  +7.105484] kauditd_printk_skb: 76 callbacks suppressed
	[ +11.513687] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.263796] systemd-fstab-generator[5629]: Ignoring "noauto" option for root device
	
	
	==> etcd [4a80b94330d4] <==
	{"level":"info","ts":"2024-04-15T18:04:17.754844Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-15T18:04:17.755479Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-15T18:04:17.768154Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-15T18:04:17.768846Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f9d5115a7bf17f9","initial-advertise-peer-urls":["https://172.19.62.76:2380"],"listen-peer-urls":["https://172.19.62.76:2380"],"advertise-client-urls":["https://172.19.62.76:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.62.76:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-15T18:04:17.771025Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-15T18:04:17.768398Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.62.76:2380"}
	{"level":"info","ts":"2024-04-15T18:04:17.775615Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.62.76:2380"}
	{"level":"info","ts":"2024-04-15T18:04:17.772654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 switched to configuration voters=(11501438176824596473)"}
	{"level":"info","ts":"2024-04-15T18:04:17.77615Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cf5a4d31b4abda62","local-member-id":"9f9d5115a7bf17f9","added-peer-id":"9f9d5115a7bf17f9","added-peer-peer-urls":["https://172.19.62.76:2380"]}
	{"level":"info","ts":"2024-04-15T18:04:17.776711Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cf5a4d31b4abda62","local-member-id":"9f9d5115a7bf17f9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:04:17.777209Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:04:19.355527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-15T18:04:19.356372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-15T18:04:19.356675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 received MsgPreVoteResp from 9f9d5115a7bf17f9 at term 2"}
	{"level":"info","ts":"2024-04-15T18:04:19.356845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 became candidate at term 3"}
	{"level":"info","ts":"2024-04-15T18:04:19.356864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 received MsgVoteResp from 9f9d5115a7bf17f9 at term 3"}
	{"level":"info","ts":"2024-04-15T18:04:19.356877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 became leader at term 3"}
	{"level":"info","ts":"2024-04-15T18:04:19.356886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f9d5115a7bf17f9 elected leader 9f9d5115a7bf17f9 at term 3"}
	{"level":"info","ts":"2024-04-15T18:04:19.36665Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9f9d5115a7bf17f9","local-member-attributes":"{Name:functional-831100 ClientURLs:[https://172.19.62.76:2379]}","request-path":"/0/members/9f9d5115a7bf17f9/attributes","cluster-id":"cf5a4d31b4abda62","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T18:04:19.366894Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:04:19.366956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T18:04:19.367488Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T18:04:19.366984Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:04:19.369775Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.62.76:2379"}
	{"level":"info","ts":"2024-04-15T18:04:19.371244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [765386ae687c] <==
	{"level":"info","ts":"2024-04-15T18:01:28.582297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 became candidate at term 2"}
	{"level":"info","ts":"2024-04-15T18:01:28.58251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 received MsgVoteResp from 9f9d5115a7bf17f9 at term 2"}
	{"level":"info","ts":"2024-04-15T18:01:28.582665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f9d5115a7bf17f9 became leader at term 2"}
	{"level":"info","ts":"2024-04-15T18:01:28.582848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f9d5115a7bf17f9 elected leader 9f9d5115a7bf17f9 at term 2"}
	{"level":"info","ts":"2024-04-15T18:01:28.594444Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:01:28.599641Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9f9d5115a7bf17f9","local-member-attributes":"{Name:functional-831100 ClientURLs:[https://172.19.62.76:2379]}","request-path":"/0/members/9f9d5115a7bf17f9/attributes","cluster-id":"cf5a4d31b4abda62","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T18:01:28.600538Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:01:28.605292Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:01:28.62525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T18:01:28.637083Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T18:01:28.631408Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T18:01:28.639252Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.62.76:2379"}
	{"level":"info","ts":"2024-04-15T18:01:28.63153Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cf5a4d31b4abda62","local-member-id":"9f9d5115a7bf17f9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:01:28.664871Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:01:28.691491Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:03:56.602085Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-15T18:03:56.602205Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-831100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.19.62.76:2380"],"advertise-client-urls":["https://172.19.62.76:2379"]}
	{"level":"warn","ts":"2024-04-15T18:03:56.602361Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:03:56.602469Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:03:56.672748Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.19.62.76:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:03:56.672847Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.19.62.76:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-15T18:03:56.673073Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f9d5115a7bf17f9","current-leader-member-id":"9f9d5115a7bf17f9"}
	{"level":"info","ts":"2024-04-15T18:03:56.685274Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.19.62.76:2380"}
	{"level":"info","ts":"2024-04-15T18:03:56.685473Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.19.62.76:2380"}
	{"level":"info","ts":"2024-04-15T18:03:56.685494Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-831100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.19.62.76:2380"],"advertise-client-urls":["https://172.19.62.76:2379"]}
	
	
	==> kernel <==
	 18:06:40 up 7 min,  0 users,  load average: 0.30, 0.42, 0.20
	Linux functional-831100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0dc4db42442b] <==
	I0415 18:04:21.130485       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0415 18:04:21.130727       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0415 18:04:21.131069       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0415 18:04:21.244554       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:04:21.276803       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 18:04:21.283991       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 18:04:21.284203       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0415 18:04:21.284286       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0415 18:04:21.285224       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0415 18:04:21.288347       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0415 18:04:21.316574       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:04:21.317585       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 18:04:21.318180       1 aggregator.go:165] initial CRD sync complete...
	I0415 18:04:21.322103       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:04:21.322382       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:04:21.323047       1 cache.go:39] Caches are synced for autoregister controller
	E0415 18:04:21.326462       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0415 18:04:22.084709       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:04:23.683384       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:04:23.707053       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:04:23.775521       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:04:23.834312       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:04:23.850189       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:04:34.342116       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:04:34.374717       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [698fa3050fb3] <==
	W0415 18:04:05.841388       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:05.845734       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:05.901595       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:05.926567       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:05.984746       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.014302       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.014396       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.020009       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.030233       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.069579       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.089535       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.099527       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.132408       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.211356       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.291386       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.349493       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.356317       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.416458       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.420785       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.423422       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.440252       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.493602       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.507867       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.525013       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:04:06.533872       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c902f023614f] <==
	I0415 18:01:49.088306       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0415 18:01:49.088748       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sfdhl"
	I0415 18:01:49.096424       1 shared_informer.go:318] Caches are synced for job
	I0415 18:01:49.234611       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-92d4s"
	I0415 18:01:49.268486       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-sd42f"
	I0415 18:01:49.283824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="219.095863ms"
	I0415 18:01:49.407546       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:01:49.407618       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 18:01:49.416500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="132.505705ms"
	I0415 18:01:49.416844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="102.198µs"
	I0415 18:01:49.443495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="396.396µs"
	I0415 18:01:49.478814       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:01:51.023167       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0415 18:01:51.074099       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-92d4s"
	I0415 18:01:51.125066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="107.822994ms"
	I0415 18:01:51.158101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="32.569912ms"
	I0415 18:01:51.158945       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="122.901µs"
	I0415 18:01:52.054219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="101.901µs"
	I0415 18:01:52.107988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="69.701µs"
	I0415 18:02:01.703256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="782.312µs"
	I0415 18:02:02.206614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="64.901µs"
	I0415 18:02:02.229376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="722.11µs"
	I0415 18:02:02.252595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="80.701µs"
	I0415 18:02:29.686919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="27.048833ms"
	I0415 18:02:29.689752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="2.660133ms"
	
	
	==> kube-controller-manager [f95aea2087ba] <==
	I0415 18:04:34.371844       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0415 18:04:34.374949       1 shared_informer.go:318] Caches are synced for service account
	I0415 18:04:34.377426       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0415 18:04:34.378403       1 shared_informer.go:318] Caches are synced for GC
	I0415 18:04:34.378544       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0415 18:04:34.383032       1 shared_informer.go:318] Caches are synced for daemon sets
	I0415 18:04:34.385025       1 shared_informer.go:318] Caches are synced for namespace
	I0415 18:04:34.385229       1 shared_informer.go:318] Caches are synced for deployment
	I0415 18:04:34.386253       1 shared_informer.go:318] Caches are synced for ephemeral
	I0415 18:04:34.396069       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0415 18:04:34.396160       1 shared_informer.go:318] Caches are synced for taint
	I0415 18:04:34.396299       1 shared_informer.go:318] Caches are synced for HPA
	I0415 18:04:34.397431       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0415 18:04:34.398188       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-831100"
	I0415 18:04:34.401540       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0415 18:04:34.398561       1 event.go:376] "Event occurred" object="functional-831100" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-831100 event: Registered Node functional-831100 in Controller"
	I0415 18:04:34.402676       1 shared_informer.go:318] Caches are synced for persistent volume
	I0415 18:04:34.429786       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0415 18:04:34.453963       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:04:34.471174       1 shared_informer.go:318] Caches are synced for cronjob
	I0415 18:04:34.486863       1 shared_informer.go:318] Caches are synced for job
	I0415 18:04:34.538435       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:04:34.853307       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:04:34.853544       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 18:04:34.912722       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [4bf55010d46f] <==
	I0415 18:04:23.084673       1 server_others.go:72] "Using iptables proxy"
	I0415 18:04:23.098753       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.62.76"]
	I0415 18:04:23.150628       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:04:23.150727       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:04:23.150745       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:04:23.154502       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:04:23.155214       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:04:23.155370       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:04:23.156865       1 config.go:188] "Starting service config controller"
	I0415 18:04:23.156956       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:04:23.157106       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:04:23.157178       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:04:23.158079       1 config.go:315] "Starting node config controller"
	I0415 18:04:23.158115       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:04:23.257950       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:04:23.258038       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:04:23.258334       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [f28bec73517a] <==
	I0415 18:01:51.158905       1 server_others.go:72] "Using iptables proxy"
	I0415 18:01:51.201289       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.62.76"]
	I0415 18:01:51.384375       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:01:51.384480       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:01:51.384515       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:01:51.388503       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:01:51.388915       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:01:51.389476       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:01:51.390798       1 config.go:188] "Starting service config controller"
	I0415 18:01:51.391006       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:01:51.391767       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:01:51.391965       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:01:51.393022       1 config.go:315] "Starting node config controller"
	I0415 18:01:51.395535       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:01:51.491890       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:01:51.493241       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:01:51.496050       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [438e7aa22ff1] <==
	W0415 18:01:32.762692       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:01:32.762812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:01:32.769055       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 18:01:32.769625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 18:01:32.776015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:01:32.776042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:01:32.805712       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 18:01:32.805738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 18:01:32.816235       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:01:32.817747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 18:01:32.822512       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:01:32.822626       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:01:32.888453       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:01:32.888915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:01:32.941590       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:01:32.942151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:01:32.974660       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:01:32.974696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:01:33.150505       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:01:33.151464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0415 18:01:36.104840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 18:03:56.526584       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0415 18:03:56.527447       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0415 18:03:56.528205       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0415 18:03:56.528386       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7e699c01da09] <==
	I0415 18:04:18.874563       1 serving.go:380] Generated self-signed cert in-memory
	W0415 18:04:21.160355       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0415 18:04:21.160718       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:04:21.160955       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0415 18:04:21.161157       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0415 18:04:21.252722       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0415 18:04:21.254987       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:04:21.271291       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0415 18:04:21.272715       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 18:04:21.274819       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0415 18:04:21.276781       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0415 18:04:21.375042       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:04:21 functional-831100 kubelet[5132]: I0415 18:04:21.327953    5132 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 15 18:04:21 functional-831100 kubelet[5132]: I0415 18:04:21.329513    5132 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 15 18:04:21 functional-831100 kubelet[5132]: E0415 18:04:21.766823    5132 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"etcd-functional-831100\" already exists" pod="kube-system/etcd-functional-831100"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.101619    5132 apiserver.go:52] "Watching apiserver"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.105295    5132 topology_manager.go:215] "Topology Admit Handler" podUID="e82d2eca-3bbb-407f-9639-db448fa365db" podNamespace="kube-system" podName="kube-proxy-sfdhl"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.105427    5132 topology_manager.go:215] "Topology Admit Handler" podUID="a05305e5-a9c7-4bee-9329-bc4608f0f7b8" podNamespace="kube-system" podName="coredns-76f75df574-sd42f"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.105539    5132 topology_manager.go:215] "Topology Admit Handler" podUID="9494c9a1-8863-43cd-91b2-67524861807c" podNamespace="kube-system" podName="storage-provisioner"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.200487    5132 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.205964    5132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e82d2eca-3bbb-407f-9639-db448fa365db-xtables-lock\") pod \"kube-proxy-sfdhl\" (UID: \"e82d2eca-3bbb-407f-9639-db448fa365db\") " pod="kube-system/kube-proxy-sfdhl"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.206277    5132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e82d2eca-3bbb-407f-9639-db448fa365db-lib-modules\") pod \"kube-proxy-sfdhl\" (UID: \"e82d2eca-3bbb-407f-9639-db448fa365db\") " pod="kube-system/kube-proxy-sfdhl"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.206368    5132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9494c9a1-8863-43cd-91b2-67524861807c-tmp\") pod \"storage-provisioner\" (UID: \"9494c9a1-8863-43cd-91b2-67524861807c\") " pod="kube-system/storage-provisioner"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.406487    5132 scope.go:117] "RemoveContainer" containerID="75a1acb33c1286f9bfa98234755282f0e567ec29c3bb22cca37139b364f598cb"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.410773    5132 scope.go:117] "RemoveContainer" containerID="da75672ff19a7a1bc60f533896b1191a84b59c432c9b9c40b6b77a7459569daf"
	Apr 15 18:04:22 functional-831100 kubelet[5132]: I0415 18:04:22.411572    5132 scope.go:117] "RemoveContainer" containerID="f28bec73517a328a2ef0a25f21cad77dec6f5aff0f0e5d9c0a615ac8c0af9c6a"
	Apr 15 18:04:28 functional-831100 kubelet[5132]: I0415 18:04:28.767994    5132 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 15 18:05:16 functional-831100 kubelet[5132]: E0415 18:05:16.236067    5132 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:05:16 functional-831100 kubelet[5132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:05:16 functional-831100 kubelet[5132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:05:16 functional-831100 kubelet[5132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:05:16 functional-831100 kubelet[5132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:06:16 functional-831100 kubelet[5132]: E0415 18:06:16.236144    5132 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:06:16 functional-831100 kubelet[5132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:06:16 functional-831100 kubelet[5132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:06:16 functional-831100 kubelet[5132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:06:16 functional-831100 kubelet[5132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [0bb076dbb61b] <==
	I0415 18:04:22.923003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 18:04:22.961951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 18:04:22.962023       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 18:04:40.404235       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 18:04:40.404508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-831100_20f3ea5d-d239-4e7a-b6ec-dc45199e7b9b!
	I0415 18:04:40.405992       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e89f036-8a93-4604-a9f4-7523b82958c5", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-831100_20f3ea5d-d239-4e7a-b6ec-dc45199e7b9b became leader
	I0415 18:04:40.505730       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-831100_20f3ea5d-d239-4e7a-b6ec-dc45199e7b9b!
	
	
	==> storage-provisioner [75a1acb33c12] <==
	I0415 18:01:59.001853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 18:01:59.020270       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 18:01:59.021594       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 18:01:59.042257       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 18:01:59.042839       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-831100_b1b37190-2579-42e3-b917-d85859c9ece3!
	I0415 18:01:59.044196       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e89f036-8a93-4604-a9f4-7523b82958c5", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-831100_b1b37190-2579-42e3-b917-d85859c9ece3 became leader
	I0415 18:01:59.143851       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-831100_b1b37190-2579-42e3-b917-d85859c9ece3!
	

-- /stdout --
** stderr ** 
	W0415 18:06:31.877922    9820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-831100 -n functional-831100
E0415 18:06:53.542952   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-831100 -n functional-831100: (13.1948631s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-831100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (37.11s)
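A note on the stderr warning that recurs throughout these failures: the Docker CLI keeps per-context metadata under `.docker\contexts\meta\<digest>\meta.json`, where `<digest>` appears to be the SHA-256 hash of the context name, which is why the same 64-hex-character directory shows up in every warning about the "default" context. An illustrative check (not part of the test suite):

```python
import hashlib

# Docker CLI context metadata path: .docker/contexts/meta/<sha256(name)>/meta.json
digest = hashlib.sha256(b"default").hexdigest()
print(digest)  # the directory name seen in the warnings above
```

The warning itself is harmless name resolution noise, but as the ConfigCmd failures below show, anything printed to stderr can still break exact-match assertions in the harness.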

TestFunctional/parallel/ConfigCmd (2.02s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-831100 config unset cpus" to be -""- but got *"W0415 18:09:57.179320    1732 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 config get cpus: exit status 14 (288.3189ms)

** stderr ** 
	W0415 18:09:57.561506    7928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-831100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0415 18:09:57.561506    7928 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-831100 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0415 18:09:57.882952   10016 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-831100 config get cpus" to be -""- but got *"W0415 18:09:58.259103   14020 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-831100 config unset cpus" to be -""- but got *"W0415 18:09:58.565552    5752 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 config get cpus: exit status 14 (319.1119ms)

** stderr ** 
	W0415 18:09:58.883488    9460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-831100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0415 18:09:58.883488    9460 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.02s)
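Each assertion above fails the same way: the harness compares the command's full stderr against an expected string, and the Docker-context warning line is prepended to the real output. A minimal sketch of the mismatch (variable names are hypothetical and the warning text is abbreviated; this is not minikube's test code):

```python
expected = "Error: specified key could not be found in config"
# Actual stderr: klog-style warning line prepended to the real error message
stderr = (
    'W0415 18:09:58.883488    9460 main.go:291] Unable to resolve the current '
    'Docker CLI context "default": ...\n'
    "Error: specified key could not be found in config"
)

# The exact comparison the test performs fails because of the extra line:
exact_match = stderr == expected

# Dropping klog warning lines recovers the message the test expects:
cleaned = "\n".join(
    line for line in stderr.splitlines() if not line.startswith("W0415")
)
```

Here `exact_match` is False while `cleaned` equals `expected`, which matches the pattern in every `functional_test.go:1206` message above: the expected string is present, just preceded by the warning.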

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 service --namespace=default --https --url hello-node: exit status 1 (15.0259357s)

** stderr ** 
	W0415 18:12:08.400770    3884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-831100 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/ServiceCmd/Format (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 service hello-node --url --format={{.IP}}: exit status 1 (15.0285759s)

** stderr ** 
	W0415 18:12:23.449717    9612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-831100 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.03s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 service hello-node --url: exit status 1 (15.0306873s)

** stderr ** 
	W0415 18:12:38.466560    9976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-831100 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestMultiControlPlane/serial/StartCluster (481.97s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-653100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0415 18:20:10.507327   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:10.522321   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:10.538366   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:10.569516   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:10.617420   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:10.712935   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:10.887420   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:11.222598   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:11.872895   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:13.160598   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:15.726303   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:20.860227   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:31.112782   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:20:51.603933   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:21:32.574525   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:21:53.547118   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 18:22:54.502570   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:24:56.746951   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 18:25:10.500583   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:25:38.344880   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ha-653100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: exit status 90 (7m24.6725666s)

-- stdout --
	* [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.19.63.147
	  - NO_PROXY=172.19.63.147
	
	

-- /stdout --
** stderr ** 
	W0415 18:19:03.346266   10384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
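The multi-document YAML dumped above stitches four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one file. A minimal sketch of how to list the object kinds in such a file — the path and the trimmed-down documents here are illustrative, not the exact file minikube wrote:

```shell
# Write a trimmed-down multi-document config like the one minikube generated.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Each YAML document declares exactly one kind; list them in order.
grep '^kind:' /tmp/kubeadm-demo.yaml
```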
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
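All of kube-vip's behaviour in the static pod above is driven by container env vars (vip_arp, cp_enable, address, lb_enable, …). A small sketch that pulls out the templated VIP address — run against an illustrative excerpt of the manifest, not the real file:

```shell
# Illustrative excerpt of the generated kube-vip manifest (not the full file).
cat > /tmp/kube-vip-demo.yaml <<'EOF'
    env:
    - name: cp_enable
      value: "true"
    - name: address
      value: 172.19.63.254
    - name: lb_enable
      value: "true"
EOF

# Print the value on the line following 'name: address'.
awk '/name: address/{getline; sub(/.*value: */, ""); print}' /tmp/kube-vip-demo.yaml
# → 172.19.63.254
```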
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
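The one-liner above updates /etc/hosts idempotently: strip any existing line for the host, append the fresh mapping, then copy the result back into place. The same pattern, replayed against a scratch file so it needs no root — the file path and helper name are illustrative:

```shell
# Scratch hosts file with a stale entry for the control-plane name.
hosts=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n172.19.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"

update_hosts_entry() {
  # Drop any existing line for the host, then append the fresh mapping.
  local ip=$1 host=$2 file=$3
  { grep -v $'\t'"$host"'$' "$file"; printf '%s\t%s\n' "$ip" "$host"; } > "$file.new"
  mv "$file.new" "$file"
}

update_hosts_entry 172.19.63.254 control-plane.minikube.internal "$hosts"
grep control-plane.minikube.internal "$hosts"
# → 172.19.63.254	control-plane.minikube.internal   (exactly one line)
```

Running the helper twice leaves a single entry, which is the point of the grep-then-append dance.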
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
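The test/ln pairs above maintain OpenSSL's subject-hash lookup links: each CA under /etc/ssl/certs gets a `<hash>.0` symlink whose name comes from `openssl x509 -hash -noout -in <cert>`. A sketch in a scratch directory, reusing the b5213941 hash the log itself reports for minikubeCA (the directory and the empty stand-in file are illustrative):

```shell
certdir=/tmp/certs-demo
mkdir -p "$certdir"
: > "$certdir/minikubeCA.pem"   # empty stand-in for the real CA certificate

# minikube computes this with: openssl x509 -hash -noout -in minikubeCA.pem
hash=b5213941                   # value seen in the log above
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"

ls -l "$certdir/$hash.0"
```

OpenSSL resolves a certificate at verification time by hashing its subject and probing `<hash>.0`, `<hash>.1`, … in the cert directory, which is why the symlink name matters and the file name does not.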
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
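The sed pipeline two lines up splices a `hosts` block into the CoreDNS Corefile ahead of the `forward` plugin, so `host.minikube.internal` resolves to the Windows host's gateway address inside the cluster. The resulting Corefile fragment looks roughly like this (a sketch of just the injected section, not the full ConfigMap):

```
        hosts {
           172.19.48.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
```

The `fallthrough` directive lets every other name fall through to the `forward` plugin, so only `host.minikube.internal` is answered from the static block.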
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
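The two log lines above show minikube renaming any bridge/podman CNI configs to `*.mk_disabled` so they stop being loaded. A minimal sketch of that step, run against a throwaway directory instead of `/etc/cni/net.d` (the `/tmp/cni` path and the `10-flannel.conf` bystander file are illustrative, not from the log):

```shell
# Recreate a fake CNI config dir with one matching and one non-matching file
# (names other than 87-podman-bridge.conflist are hypothetical examples).
mkdir -p /tmp/cni
touch /tmp/cni/87-podman-bridge.conflist /tmp/cni/10-flannel.conf

# Same matching logic as the logged command: bridge/podman configs that are
# not already disabled get renamed with a .mk_disabled suffix.
find /tmp/cni -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

The logged command passes `{}` directly inside the `sh -c` string; the sketch uses `"$1"` instead, which is the safer quoting-proof form of the same rename.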
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
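The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver (`SystemdCgroup = false`), normalize the runc runtime to `io.containerd.runc.v2`, and pin `conf_dir`. A sketch of the central edit against a throwaway config (the `/tmp/demo-config.toml` path and the snippet's contents are illustrative):

```shell
# Minimal stand-in for a containerd config with systemd cgroups enabled.
cat > /tmp/demo-config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution as the logged command: flip SystemdCgroup to false
# while preserving the line's leading indentation via the \1 backreference.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/demo-config.toml
```

The `( *)` capture plus `\1` is what lets one expression work regardless of how deeply the key is indented in the TOML file.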
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	* 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-windows-amd64.exe start -p ha-653100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100: (13.2288s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 logs -n 25: (9.2291079s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| service        | functional-831100 service                                | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC |                     |
	|                | --namespace=default --https                              |                   |                   |                |                     |                     |
	|                | --url hello-node                                         |                   |                   |                |                     |                     |
	| image          | functional-831100 image save --daemon                    | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC | 15 Apr 24 18:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-831100 |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| service        | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC |                     |
	|                | service hello-node --url                                 |                   |                   |                |                     |                     |
	|                | --format={{.IP}}                                         |                   |                   |                |                     |                     |
	| ssh            | functional-831100 ssh sudo cat                           | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC | 15 Apr 24 18:12 UTC |
	|                | /etc/ssl/certs/11272.pem                                 |                   |                   |                |                     |                     |
	| docker-env     | functional-831100 docker-env                             | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC | 15 Apr 24 18:12 UTC |
	| service        | functional-831100 service                                | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC |                     |
	|                | hello-node --url                                         |                   |                   |                |                     |                     |
	| ssh            | functional-831100 ssh sudo cat                           | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC | 15 Apr 24 18:12 UTC |
	|                | /usr/share/ca-certificates/11272.pem                     |                   |                   |                |                     |                     |
	| start          | -p functional-831100                                     | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC |                     |
	|                | --dry-run --memory                                       |                   |                   |                |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |                   |                |                     |                     |
	|                | --driver=hyperv                                          |                   |                   |                |                     |                     |
	| ssh            | functional-831100 ssh sudo cat                           | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC | 15 Apr 24 18:13 UTC |
	|                | /etc/ssl/certs/51391683.0                                |                   |                   |                |                     |                     |
	| dashboard      | --url --port 36195                                       | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:12 UTC |                     |
	|                | -p functional-831100                                     |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=1                                   |                   |                   |                |                     |                     |
	| ssh            | functional-831100 ssh sudo cat                           | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | /etc/ssl/certs/112722.pem                                |                   |                   |                |                     |                     |
	| image          | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | image ls --format short                                  |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| image          | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | image ls --format yaml                                   |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| ssh            | functional-831100 ssh sudo cat                           | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | /usr/share/ca-certificates/112722.pem                    |                   |                   |                |                     |                     |
	| ssh            | functional-831100 ssh pgrep                              | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC |                     |
	|                | buildkitd                                                |                   |                   |                |                     |                     |
	| ssh            | functional-831100 ssh sudo cat                           | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                |                   |                   |                |                     |                     |
	| image          | functional-831100 image build -t                         | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | localhost/my-image:functional-831100                     |                   |                   |                |                     |                     |
	|                | testdata\build --alsologtostderr                         |                   |                   |                |                     |                     |
	| image          | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | image ls --format json                                   |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| image          | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | image ls --format table                                  |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| update-context | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | update-context                                           |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |                |                     |                     |
	| image          | functional-831100 image ls                               | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	| update-context | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | update-context                                           |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |                |                     |                     |
	| update-context | functional-831100                                        | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:13 UTC | 15 Apr 24 18:13 UTC |
	|                | update-context                                           |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |                |                     |                     |
	| delete         | -p functional-831100                                     | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:17 UTC | 15 Apr 24 18:19 UTC |
	| start          | -p ha-653100 --wait=true                                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:19 UTC |                     |
	|                | --memory=2200 --ha                                       |                   |                   |                |                     |                     |
	|                | -v=7 --alsologtostderr                                   |                   |                   |                |                     |                     |
	|                | --driver=hyperv                                          |                   |                   |                |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:19:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
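The SSH command above uses an "install only if changed" idiom: `diff -u old new || { mv ...; restart; }`. Because `diff` exits non-zero both when the files differ and when the old file is missing (the `can't stat` case in this log), the replacement branch runs exactly when an update is needed. A sketch of the same idiom on plain files under `/tmp` (filenames illustrative):

```shell
# Simulate first provisioning: no existing unit file yet.
printf 'ExecStart=/usr/bin/dockerd\n' > /tmp/docker.service.new
rm -f /tmp/docker.service   # diff will fail with "No such file or directory"
# Install the new file only when diff reports a difference (or a missing file).
diff -u /tmp/docker.service /tmp/docker.service.new >/dev/null 2>&1 \
  || mv /tmp/docker.service.new /tmp/docker.service
```

When the two files are already identical, `diff` exits 0 and the move (and, in the real command, the daemon-reload and restart) is skipped.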
	
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
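The clock-fix step above compares the guest's epoch reading against the host's wall clock and resyncs with `sudo date -s @<epoch>` when they drift. Recomputing the ~5 s delta reported in the log from the two timestamps (the host epoch here is derived from the `Remote:` time above; values are from this log, the threshold logic is not shown):

```shell
# Epochs taken from the log above: guest read 1713205288.448...,
# host ("Remote") read 2024-04-15 18:21:23 UTC = 1713205283.
guest_epoch=1713205288
host_epoch=1713205283
# Whole-second skew between guest and host clocks.
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"
```

This matches the `delta=5.185908919s` the provisioner logs before issuing the `date -s` resync.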
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
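The `find`-and-rename step above disables competing bridge/podman CNI configs by appending a `.mk_disabled` suffix, so the runtime's config-directory scan ignores them while leaving the files recoverable. A sketch against a demo directory (paths and filenames under `/tmp` are illustrative; the real directory is `/etc/cni/net.d`):

```shell
# Create a demo CNI config dir with one config to disable and one to keep.
d=/tmp/cni-demo
mkdir -p "$d"
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist"
# Rename bridge/podman configs that are not already disabled.
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

The `-not -name '*.mk_disabled'` guard makes the step idempotent: rerunning it will not stack suffixes on already-disabled files.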
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
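The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place, preserving each line's indentation via a capture group. One of those substitutions (the cgroup-driver switch) on a sample file, with an illustrative path under `/tmp`:

```shell
# Sample config.toml line with the indentation containerd uses.
printf '    SystemdCgroup = true\n' > /tmp/containerd-config.toml
# Same sed as the log: \1 re-emits the captured leading spaces so the
# TOML nesting is untouched while the value flips to false (cgroupfs).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/containerd-config.toml
cat /tmp/containerd-config.toml
```

The `( *)` capture is what lets one pattern work regardless of how deeply the key is nested in the generated config.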
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
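The `/etc/hosts` update above uses a filter-then-append idiom: `grep -v` drops any stale entry for the name, the new mapping is echoed after it, and the result is copied back over the original. A sketch on a demo file (paths and the stale IP are illustrative; the real target is `/etc/hosts`):

```shell
# Demo hosts file with a stale host.minikube.internal entry.
printf '127.0.0.1\tlocalhost\n172.19.48.9\thost.minikube.internal\n' > /tmp/hosts-demo
# Drop any line ending in "<TAB>host.minikube.internal", append the fresh one,
# then atomically replace the file (the log copies via /tmp/h.$$ with sudo cp).
{ grep -v $'\thost.minikube.internal$' /tmp/hosts-demo; \
  printf '172.19.48.1\thost.minikube.internal\n'; } > /tmp/hosts-demo.new
mv /tmp/hosts-demo.new /tmp/hosts-demo
```

Writing to a temp file first matters because redirecting `grep`'s output straight back into its own input file would truncate it before reading.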
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
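The `/etc/hosts` rewrite above uses a common atomic-ish pattern: filter out any stale line for the host name, append the fresh mapping, write to a temp file, then copy the result into place in one step. A minimal sketch of the same pattern, applied to a scratch file so it is safe to run (the `172.19.0.9` stale entry is a hypothetical stand-in, not from this log):

```shell
# Build a scratch hosts file with one stale control-plane entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"

# Same idiom as the logged command: drop the old mapping, append the new one,
# then replace the file with the rewritten copy.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'172.19.63.254\tcontrol-plane.minikube.internal'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

grep control-plane "$hosts"
```

Writing to `/tmp/h.$$` first (as the log does) keeps the live `/etc/hosts` intact if the pipeline fails midway.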
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
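The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup convention: tools find a CA in a directory via a `<subject-hash>.0` symlink. A self-contained sketch in a temp directory (the `demoCA` subject is illustrative; paths are not the real `/etc/ssl/certs`), assuming an `openssl` binary is on PATH:

```shell
dir=$(mktemp -d)

# Generate a throwaway self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# Compute the subject hash and create the <hash>.0 symlink, exactly as the
# logged `test -L ... || ln -fs ...` commands do for each installed CA.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"

ls -l "$dir/$hash.0"
```

With the symlink in place, `openssl verify -CApath "$dir"` can locate the CA by hash, which is what makes the `/etc/ssl/certs` entries usable system-wide.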
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
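The failed `stat` above is not an error path: minikube deliberately probes for the cert and treats a nonzero exit status as "doesn't exist, likely first start". The same exit-status probe against a guaranteed-absent temp path (the path is a stand-in, not the real cert location):

```shell
# mktemp -u returns a fresh path without creating it, so stat must fail.
f=$(mktemp -u)

# Branch on stat's exit status, as the logged first-start check does.
if stat "$f" >/dev/null 2>&1; then
  state=exists   # reuse the existing cert
else
  state=missing  # first start: generate it
fi
echo "cert $state"
```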
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
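The run of `kubectl get sa default` lines above is kubeadm's elevateKubeSystemPrivileges wait: minikube probes for the `default` service account at a roughly fixed interval (here about 500ms) until it exists. A minimal sketch of that poll loop, with illustrative names (`wait_for_default_sa`, `fake_check` are not minikube's):

```python
import time

def wait_for_default_sa(check, interval=0.5, timeout=120.0):
    """Poll check() at a fixed interval until it returns True,
    or raise TimeoutError once `timeout` seconds have elapsed."""
    start = time.monotonic()
    while not check():
        if time.monotonic() - start > timeout:
            raise TimeoutError("default service account never appeared")
        time.sleep(interval)
    return time.monotonic() - start

# Simulated probe that succeeds on the third call, standing in for
# running `kubectl get sa default` over SSH.
calls = {"n": 0}
def fake_check():
    calls["n"] += 1
    return calls["n"] >= 3

elapsed = wait_for_default_sa(fake_check, interval=0.01)
```

In the log above the same pattern takes 24 probes and just under 12 seconds before the service account is available.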
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
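The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a `log` directive before `errors` and a `hosts` block before the `forward . /etc/resolv.conf` line, so that `host.minikube.internal` resolves to the host gateway (172.19.48.1). Reconstructed from the sed expressions (not captured from the live cluster), the patched Corefile section looks roughly like:

```
        log
        errors
        ...
        hosts {
           172.19.48.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
```

The `fallthrough` keeps all other names flowing on to the `forward` plugin.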
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
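The `Get-VMSwitch` pipeline above keeps only external switches plus the built-in Default Switch (matched by its well-known GUID), then sorts by `SwitchType`; with no external switch present, minikube lands on "Default Switch" as logged. A small sketch of that filter over the JSON from the log (`candidate_switches` is an illustrative name, not minikube's):

```python
import json

# VMSwitch JSON as the Get-VMSwitch pipeline printed it above.
raw = """[
    {"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
     "Name": "Default Switch",
     "SwitchType": 1}
]"""

EXTERNAL = 2  # Hyper-V VMSwitchType: Private=0, Internal=1, External=2
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def candidate_switches(switches):
    """Mirror the Where-Object filter: keep external switches plus the
    built-in Default Switch (matched by its well-known GUID), then sort
    by SwitchType as the Sort-Object step does."""
    kept = [s for s in switches
            if s["SwitchType"] == EXTERNAL or s["Id"] == DEFAULT_SWITCH_ID]
    return sorted(kept, key=lambda s: s["SwitchType"])

chosen = candidate_switches(json.loads(raw))[0]["Name"]
```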
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 
	
	
	==> Docker <==
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.747615114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.748007111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.761077618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.761144217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.761234217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.761355416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.763784598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.763861298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.764350094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:50 ha-653100 dockerd[1327]: time="2024-04-15T18:22:50.764869891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:50 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:22:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c2bc3be2dada411d1312efd7b0564955f7bdb9a0ef539aefba93c8a07aab999a/resolv.conf as [nameserver 172.19.48.1]"
	Apr 15 18:22:51 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:22:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/66b040582e9fefd5142fd99fb33638d2a7c4b6457d5dff7567a0efc4c49d8fbf/resolv.conf as [nameserver 172.19.48.1]"
	Apr 15 18:22:51 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:22:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41946a72e39134d2ef4a24762b718d0ef1b3745961706f1cf01931c93ec7d880/resolv.conf as [nameserver 172.19.48.1]"
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.315610641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.315837640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.315852540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.316126339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.404139244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.404843342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.405249640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.407402533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.494969039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.495221539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.495372438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.495812537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	58d38dcc399d7       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   66b040582e9fe       coredns-76f75df574-hz5n2
	7f2e95849717e       6e38f40d628db                                                                                       3 minutes ago       Running             storage-provisioner       0                   41946a72e3913       storage-provisioner
	79df4cc493ccd       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   c2bc3be2dada4       coredns-76f75df574-sw766
	8533539a42fc8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988            4 minutes ago       Running             kindnet-cni               0                   840d4c720c681       kindnet-k8jt8
	ece5eb28b20be       a1d263b5dc5b0                                                                                       4 minutes ago       Running             kube-proxy                0                   590527a253a30       kube-proxy-dgh6m
	0cf5b602fc0c4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016   4 minutes ago       Running             kube-vip                  0                   71c70584ee9c6       kube-vip-ha-653100
	a0697c56404b8       6052a25da3f97                                                                                       4 minutes ago       Running             kube-controller-manager   0                   5c4190df9fb18       kube-controller-manager-ha-653100
	d68da55f0f382       8c390d98f50c0                                                                                       4 minutes ago       Running             kube-scheduler            0                   92e96b6d41bb2       kube-scheduler-ha-653100
	b7958fc0d30b8       39f995c9f1996                                                                                       4 minutes ago       Running             kube-apiserver            0                   a7b3e44514ced       kube-apiserver-ha-653100
	a0fa6c17de399       3861cfcd7c04c                                                                                       4 minutes ago       Running             etcd                      0                   65fe5df3a93dd       etcd-ha-653100
	
	
	==> coredns [58d38dcc399d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45845 - 8967 "HINFO IN 8354542665525626293.2689365418710486320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045113649s
	
	
	==> coredns [79df4cc493cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57426 - 12156 "HINFO IN 2507889984284766848.6813386495577107890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.33687907s
	
	
	==> describe nodes <==
	Name:               ha-653100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:22:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:26:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:22:54 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:22:54 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:22:54 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:22:54 +0000   Mon, 15 Apr 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.63.147
	  Hostname:    ha-653100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7ba8367096d4bf9b0e4541361a84287
	  System UUID:                64d5f641-1f2f-ce46-8918-a08d661c1258
	  Boot ID:                    994d41df-0ae9-4f39-ad28-f5e794182c63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-hz5n2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m13s
	  kube-system                 coredns-76f75df574-sw766             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m13s
	  kube-system                 etcd-ha-653100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m27s
	  kube-system                 kindnet-k8jt8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-ha-653100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-ha-653100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-dgh6m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-653100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-vip-ha-653100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m11s  kube-proxy       
	  Normal  Starting                 4m26s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s  kubelet          Node ha-653100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s  kubelet          Node ha-653100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s  kubelet          Node ha-653100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s  node-controller  Node ha-653100 event: Registered Node ha-653100 in Controller
	  Normal  NodeReady                4m1s   kubelet          Node ha-653100 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.735783] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +2.084698] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.376265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 18:21] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.209937] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +33.615481] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.104388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615924] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.216331] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.260985] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.876807] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.212935] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.227831] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.311128] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[Apr15 18:22] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.114802] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.164512] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +7.677617] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.774902] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.244487] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[ +14.155639] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.279744] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [a0fa6c17de39] <==
	{"level":"info","ts":"2024-04-15T18:22:15.502977Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-15T18:22:15.503104Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.63.147:2380"}
	{"level":"info","ts":"2024-04-15T18:22:15.503128Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.63.147:2380"}
	{"level":"info","ts":"2024-04-15T18:22:16.370495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-15T18:22:16.371859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-15T18:22:16.371914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c received MsgPreVoteResp from 87419fc5adebc62c at term 1"}
	{"level":"info","ts":"2024-04-15T18:22:16.371929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c became candidate at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.371936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c received MsgVoteResp from 87419fc5adebc62c at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.371947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c became leader at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.371957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 87419fc5adebc62c elected leader 87419fc5adebc62c at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.379846Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"87419fc5adebc62c","local-member-attributes":"{Name:ha-653100 ClientURLs:[https://172.19.63.147:2379]}","request-path":"/0/members/87419fc5adebc62c/attributes","cluster-id":"877b68dea54e79ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T18:22:16.380125Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.381521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:22:16.38789Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T18:22:16.387913Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T18:22:16.388265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:22:16.392534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"877b68dea54e79ed","local-member-id":"87419fc5adebc62c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.392806Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.393093Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.397755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T18:22:16.407773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.63.147:2379"}
	{"level":"warn","ts":"2024-04-15T18:22:40.147381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.732834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-15T18:22:40.147471Z","caller":"traceutil/trace.go:171","msg":"trace[1597021408] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:383; }","duration":"127.883033ms","start":"2024-04-15T18:22:40.019571Z","end":"2024-04-15T18:22:40.147455Z","steps":["trace[1597021408] 'range keys from in-memory index tree'  (duration: 127.546236ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:22:59.717501Z","caller":"traceutil/trace.go:171","msg":"trace[1969279315] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"139.509436ms","start":"2024-04-15T18:22:59.57797Z","end":"2024-04-15T18:22:59.71748Z","steps":["trace[1969279315] 'process raft request'  (duration: 139.406534ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:23:00.52288Z","caller":"traceutil/trace.go:171","msg":"trace[385091820] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"121.870177ms","start":"2024-04-15T18:23:00.400991Z","end":"2024-04-15T18:23:00.522861Z","steps":["trace[385091820] 'process raft request'  (duration: 121.627673ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:26:50 up 6 min,  0 users,  load average: 0.69, 0.69, 0.31
	Linux ha-653100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8533539a42fc] <==
	I0415 18:24:46.381836       1 main.go:227] handling current node
	I0415 18:24:56.393577       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:24:56.393905       1 main.go:227] handling current node
	I0415 18:25:06.408479       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:25:06.408613       1 main.go:227] handling current node
	I0415 18:25:16.416254       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:25:16.416774       1 main.go:227] handling current node
	I0415 18:25:26.428925       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:25:26.429673       1 main.go:227] handling current node
	I0415 18:25:36.440586       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:25:36.440633       1 main.go:227] handling current node
	I0415 18:25:46.450906       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:25:46.451018       1 main.go:227] handling current node
	I0415 18:25:56.459415       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:25:56.459527       1 main.go:227] handling current node
	I0415 18:26:06.468533       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:26:06.468672       1 main.go:227] handling current node
	I0415 18:26:16.483666       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:26:16.483899       1 main.go:227] handling current node
	I0415 18:26:26.491376       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:26:26.491470       1 main.go:227] handling current node
	I0415 18:26:36.498941       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:26:36.499078       1 main.go:227] handling current node
	I0415 18:26:46.504655       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:26:46.504944       1 main.go:227] handling current node
	
	
	==> kube-apiserver [b7958fc0d30b] <==
	I0415 18:22:19.461381       1 controller.go:624] quota admission added evaluator for: namespaces
	I0415 18:22:19.468850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 18:22:19.469135       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 18:22:19.471941       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 18:22:19.472253       1 aggregator.go:165] initial CRD sync complete...
	I0415 18:22:19.472339       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:22:19.472452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:22:19.472462       1 cache.go:39] Caches are synced for autoregister controller
	I0415 18:22:19.498049       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:22:19.510348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:22:20.354035       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 18:22:20.363724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 18:22:20.363838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:22:21.763949       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:22:21.866542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:22:22.100224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 18:22:22.118571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.63.147]
	I0415 18:22:22.120605       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 18:22:22.130952       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:22:22.385516       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:22:24.016138       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:22:24.048032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 18:22:24.081226       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:22:36.868875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 18:22:36.898745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a0697c56404b] <==
	I0415 18:22:36.828148       1 shared_informer.go:318] Caches are synced for disruption
	I0415 18:22:36.828607       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:22:36.830505       1 shared_informer.go:318] Caches are synced for attach detach
	I0415 18:22:36.836300       1 shared_informer.go:318] Caches are synced for persistent volume
	I0415 18:22:36.958383       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0415 18:22:37.002786       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k8jt8"
	I0415 18:22:37.002844       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dgh6m"
	I0415 18:22:37.136677       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-sw766"
	I0415 18:22:37.196138       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-hz5n2"
	I0415 18:22:37.244882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="305.132073ms"
	I0415 18:22:37.265766       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:22:37.284118       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:22:37.284549       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 18:22:37.317722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="72.247257ms"
	I0415 18:22:37.317794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.6µs"
	I0415 18:22:49.963920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="429.397µs"
	I0415 18:22:49.975545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="117.899µs"
	I0415 18:22:50.009242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="326.398µs"
	I0415 18:22:50.048064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="90.899µs"
	I0415 18:22:51.764868       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0415 18:22:52.188891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.302µs"
	I0415 18:22:52.287692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.246165ms"
	I0415 18:22:52.288055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="224.505µs"
	I0415 18:22:52.333123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.028652ms"
	I0415 18:22:52.333675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="426.909µs"
	
	
	==> kube-proxy [ece5eb28b20b] <==
	I0415 18:22:38.391716       1 server_others.go:72] "Using iptables proxy"
	I0415 18:22:38.407680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.63.147"]
	I0415 18:22:38.495319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:22:38.495346       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:22:38.495361       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:22:38.500785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:22:38.501443       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:22:38.501468       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:22:38.503945       1 config.go:188] "Starting service config controller"
	I0415 18:22:38.504041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:22:38.504268       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:22:38.504770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:22:38.505829       1 config.go:315] "Starting node config controller"
	I0415 18:22:38.507970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:22:38.605316       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:22:38.605583       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:22:38.608238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d68da55f0f38] <==
	W0415 18:22:20.533571       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:22:20.533671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:22:20.559089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.559148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.566941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.569271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.649432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:22:20.649545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:22:20.680518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:22:20.681133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:22:20.703015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:22:20.703474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:22:20.766338       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 18:22:20.766458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 18:22:20.789649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 18:22:20.790593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 18:22:20.803334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:22:20.804054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:22:20.808728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:22:20.809130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:22:20.838937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:22:20.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:22:20.865287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:22:20.865345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 18:22:22.187395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:22:51 ha-653100 kubelet[2226]: I0415 18:22:51.074101    2226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41946a72e39134d2ef4a24762b718d0ef1b3745961706f1cf01931c93ec7d880"
	Apr 15 18:22:51 ha-653100 kubelet[2226]: I0415 18:22:51.113686    2226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2bc3be2dada411d1312efd7b0564955f7bdb9a0ef539aefba93c8a07aab999a"
	Apr 15 18:22:51 ha-653100 kubelet[2226]: I0415 18:22:51.119775    2226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b040582e9fefd5142fd99fb33638d2a7c4b6457d5dff7567a0efc4c49d8fbf"
	Apr 15 18:22:52 ha-653100 kubelet[2226]: I0415 18:22:52.221852    2226 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hz5n2" podStartSLOduration=15.221800588 podStartE2EDuration="15.221800588s" podCreationTimestamp="2024-04-15 18:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-15 18:22:52.185446241 +0000 UTC m=+28.223528471" watchObservedRunningTime="2024-04-15 18:22:52.221800588 +0000 UTC m=+28.259882818"
	Apr 15 18:22:52 ha-653100 kubelet[2226]: I0415 18:22:52.247987    2226 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.247937525 podStartE2EDuration="7.247937525s" podCreationTimestamp="2024-04-15 18:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-15 18:22:52.224070335 +0000 UTC m=+28.262152565" watchObservedRunningTime="2024-04-15 18:22:52.247937525 +0000 UTC m=+28.286019755"
	Apr 15 18:23:24 ha-653100 kubelet[2226]: E0415 18:23:24.243767    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:23:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:23:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:23:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:23:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:24:24 ha-653100 kubelet[2226]: E0415 18:24:24.245379    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:24:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:24:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:24:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:24:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:25:24 ha-653100 kubelet[2226]: E0415 18:25:24.244781    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:25:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:25:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:25:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:25:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:26:24 ha-653100 kubelet[2226]: E0415 18:26:24.244279    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:26:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:26:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:26:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:26:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7f2e95849717] <==
	I0415 18:22:51.745766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 18:22:51.775039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 18:22:51.776486       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 18:22:51.796625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 18:22:51.797264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855!
	I0415 18:22:51.798087       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b2abca4-b232-44be-91ab-d881b60cfa0a", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855 became leader
	I0415 18:22:51.899439       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855!
	

-- /stdout --
** stderr ** 
	W0415 18:26:41.863032    3552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100
E0415 18:26:53.549182   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100: (13.2167947s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-653100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (481.97s)

TestMultiControlPlane/serial/DeployApp (759.24s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- rollout status deployment/busybox
E0415 18:30:10.503128   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:31:53.560219   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 18:35:10.517914   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:36:33.713734   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:36:53.561906   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- rollout status deployment/busybox: exit status 1 (10m4.2888287s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	W0415 18:27:05.908311   11448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:10.189431    2520 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:11.562133   13080 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:13.045445    2548 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:16.808064    3908 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:22.143307    5588 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:27.437991    5680 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:35.952811   14012 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:37:42.365800    9040 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:38:01.991692    7624 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:38:39.100400    5752 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:39:00.347668    3908 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0415 18:39:00.347668    3908 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-5w5x4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-5w5x4 -- nslookup kubernetes.io: (2.2422017s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- nslookup kubernetes.io: exit status 1 (448.3303ms)

** stderr ** 
	W0415 18:39:03.546891    8384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-8pgjv does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7fdf7869d9-8pgjv could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- nslookup kubernetes.io: exit status 1 (470.3151ms)

** stderr ** 
	W0415 18:39:04.016481   13440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-tk6sh does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7fdf7869d9-tk6sh could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-5w5x4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- nslookup kubernetes.default: exit status 1 (442.4846ms)

** stderr ** 
	W0415 18:39:05.092227   11184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-8pgjv does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7fdf7869d9-8pgjv could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- nslookup kubernetes.default: exit status 1 (440.6884ms)

** stderr ** 
	W0415 18:39:05.530915    3200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-tk6sh does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7fdf7869d9-tk6sh could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-5w5x4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (466.1905ms)

** stderr ** 
	W0415 18:39:06.548951    6164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-8pgjv does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7fdf7869d9-8pgjv could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (456.0386ms)

** stderr ** 
	W0415 18:39:07.024245   10164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-tk6sh does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7fdf7869d9-tk6sh could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100: (13.5018729s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 logs -n 25: (9.2363962s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | -p functional-831100                 | functional-831100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:17 UTC | 15 Apr 24 18:19 UTC |
	| start   | -p ha-653100 --wait=true             | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:19 UTC |                     |
	|         | --memory=2200 --ha                   |                   |                   |                |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |                |                     |                     |
	|         | --driver=hyperv                      |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- apply -f             | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:27 UTC | 15 Apr 24 18:27 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- rollout status       | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:27 UTC |                     |
	|         | deployment/busybox                   |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 -- nslookup |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv -- nslookup |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100         | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh -- nslookup |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:19:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
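The Hyper-V driver's creation sequence above (New-VHD through Start-VM) can be summarized as a dry-run sketch. All paths and the VM name are copied from this log for illustration only; `run` merely echoes the PowerShell each `powershell.exe -NoProfile -NonInteractive` invocation would execute, so this sketch is runnable anywhere:

```shell
# Dry-run sketch of the Hyper-V provisioning sequence in the log above.
VM=ha-653100
DIR='C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100'

run() { echo "POWERSHELL> $*"; }   # stand-in for powershell.exe -NoProfile -NonInteractive

# 1. Create a tiny fixed VHD, convert it to a dynamic disk, then grow it to 20000MB.
run "Hyper-V\\New-VHD -Path '$DIR\\fixed.vhd' -SizeBytes 10MB -Fixed"
run "Hyper-V\\Convert-VHD -Path '$DIR\\fixed.vhd' -DestinationPath '$DIR\\disk.vhd' -VHDType Dynamic -DeleteSource"
run "Hyper-V\\Resize-VHD -Path '$DIR\\disk.vhd' -SizeBytes 20000MB"
# 2. Create the VM, pin memory/CPU, attach the boot ISO and disk, then start it.
run "Hyper-V\\New-VM $VM -Path '$DIR' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB"
run "Hyper-V\\Set-VMMemory -VMName $VM -DynamicMemoryEnabled \$false"
run "Hyper-V\\Set-VMProcessor $VM -Count 2"
run "Hyper-V\\Set-VMDvdDrive -VMName $VM -Path '$DIR\\boot2docker.iso'"
run "Hyper-V\\Add-VMHardDiskDrive -VMName $VM -Path '$DIR\\disk.vhd'"
run "Hyper-V\\Start-VM $VM"
```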
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
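The repeated Get-VM state/ipaddresses queries above are a poll loop: the driver re-reads the first NIC's address until DHCP assigns one. A minimal sketch of that loop, with `get_ip` as a hypothetical stub standing in for `((Get-VM ha-653100).networkadapters[0]).ipaddresses[0]` (here it returns empty until the third call):

```shell
# Sketch of minikube's wait-for-IP polling, with a stubbed query.
attempts=0
get_ip() {                 # stub: sets IP; empty until the 3rd call
  attempts=$((attempts + 1))
  IP=""
  if [ "$attempts" -ge 3 ]; then IP="172.19.63.147"; fi
}

IP=""
tries=0
while [ -z "$IP" ] && [ "$tries" -lt 30 ]; do   # bounded retries
  get_ip                    # the real driver also sleeps ~1s between polls
  tries=$((tries + 1))
done
echo "VM IP: $IP"
```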
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
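The SSH command above edits /etc/hosts idempotently: if no line already maps the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. The same logic can be exercised locally against a scratch copy (no sudo, file path is a temp file, not the real /etc/hosts):

```shell
# Re-run of the /etc/hosts update logic against a scratch file.
NAME=ha-653100
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then          # hostname not mapped yet
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then    # rewrite existing entry
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else                                                    # or append a new one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it twice leaves the file unchanged the second time, which is why minikube can safely re-provision the same host.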
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
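The timestamps above show libmachine polling `( Hyper-V\Get-VM … ).state` and the adapter's first IP address roughly once per second, treating empty stdout as "no address yet", until DHCP hands the guest 172.19.63.104. A minimal sketch of that wait loop as a shell script; `get_ip` is a hypothetical stand-in for the `powershell.exe … ipaddresses[0]` query, not minikube's actual code:

```shell
#!/bin/sh
# Poll for the VM's IP until it is non-empty or attempts run out,
# mirroring the ~1s retry cadence visible in the log timestamps.

# Fake stand-in for the Hyper-V ipaddresses[0] query: returns nothing
# for the first couple of polls, then an address (simulating DHCP delay).
get_ip() {
  [ "$tries" -ge 2 ] && echo "172.19.63.104"
}

ip=""
tries=0
while [ -z "$ip" ] && [ "$tries" -lt 10 ]; do
  ip=$(get_ip)
  [ -z "$ip" ] && sleep 1   # back off between polls, as the log does
  tries=$((tries + 1))
done
echo "IP: $ip"
```

The real loop re-checks the VM state before each IP query, so a crashed VM is detected instead of polling forever.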
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
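The `diff … || { mv …; systemctl … }` one-liner above is an idempotent-update idiom: the unit is only replaced (and Docker restarted) when the freshly rendered file differs from the installed one. The `can't stat` message just means no unit existed yet, so the first run always installs. The same pattern against scratch files (paths here are illustrative, not the real `/lib/systemd/system/docker.service`):

```shell
#!/bin/sh
# Idempotent config update: move the .new rendering into place only
# when it differs from (or is missing from) the current file.
set -eu
tmp=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$tmp/docker.service.new"

# First run: target missing, diff fails, the new file is installed.
diff -u "$tmp/docker.service" "$tmp/docker.service.new" 2>/dev/null || {
  mv "$tmp/docker.service.new" "$tmp/docker.service"
  echo "unit updated"   # where minikube runs daemon-reload && restart docker
}

# Second run with an identical rendering: diff succeeds, nothing changes.
printf 'ExecStart=/usr/bin/dockerd\n' > "$tmp/docker.service.new"
diff -u "$tmp/docker.service" "$tmp/docker.service.new" >/dev/null && \
  echo "unit unchanged"
```

Because the restart only happens on a real change, re-provisioning an already-configured node leaves the running Docker daemon untouched.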
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
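The guest-clock reconciliation logged above (fix.go:216–236) can be sketched as follows. The two epoch values are taken from the log itself; everything else is illustrative, not minikube's actual implementation:

```shell
# Illustrative sketch of the fix.go clock reconciliation above: minikube reads
# the guest clock over SSH (date +%s.%N), compares it with the host-side
# timestamp, and resets the guest clock from the host epoch when they drift.
guest=1713205508   # whole epoch seconds reported by the guest (from the log)
remote=1713205503  # epoch seconds the host recorded for the same instant
delta=$((guest - remote))
echo "delta=${delta}s"                     # the log reports delta=5.305101633s
echo "would run: sudo date -s @${guest}"   # the corrective command the log shows
```

The sub-second part of the delta comes from the `%N` nanoseconds field; the sketch keeps whole seconds only.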
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
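The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place so containerd uses the `cgroupfs` driver and the runc v2 shim. As a self-contained sketch, the `SystemdCgroup` flip can be reproduced against a throwaway copy (the TOML fragment below is a minimal stand-in for the real config file):

```shell
# Reproduce the SystemdCgroup edit from the log against a temporary file;
# the one-line TOML fragment is a minimal stand-in for the real config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  SystemdCgroup = true
EOF
# Same sed expression as the logged command: flip the flag, keep indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"   # prints:   SystemdCgroup = false
rm -f "$cfg"
```

The capture group `( *)` preserves the original indentation, which is why the replacement reinserts it as `\1`.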
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 
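The root cause recorded above is dockerd exiting because its managed containerd never came back after the restart: `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`. A hedged sketch of that startup check, run against a deliberately nonexistent path rather than the real socket:

```shell
# Sketch of the check dockerd effectively performs at startup: dial the
# containerd socket, and fail the daemon if it never appears within the
# deadline. The path here is deliberately hypothetical, not the real socket.
sock=/tmp/hypothetical-containerd.sock
if [ -S "$sock" ]; then
  echo "dialed $sock"
else
  echo "failed to dial \"$sock\": socket missing"
fi
```

On the real host, the standard systemd commands `systemctl status containerd` and `journalctl -u containerd --no-pager` would show why the socket was never created after `sudo systemctl restart docker`.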
	
	
	==> Docker <==
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.405249640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.407402533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.494969039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.495221539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.495372438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:22:51 ha-653100 dockerd[1327]: time="2024-04-15T18:22:51.495812537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.406838301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.407303300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.407328800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.407457500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:06 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:27:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ba88ccaba1a512a72acfefb5864241c5bdcf769724a94eb9e19d7eb09298ffa/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 15 18:27:07 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:27:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.008356931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012080171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012126471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012457375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3810def19c30b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   4ba88ccaba1a5       busybox-7fdf7869d9-5w5x4
	58d38dcc399d7       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   66b040582e9fe       coredns-76f75df574-hz5n2
	7f2e95849717e       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   41946a72e3913       storage-provisioner
	79df4cc493ccd       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   c2bc3be2dada4       coredns-76f75df574-sw766
	8533539a42fc8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              16 minutes ago      Running             kindnet-cni               0                   840d4c720c681       kindnet-k8jt8
	ece5eb28b20be       a1d263b5dc5b0                                                                                         16 minutes ago      Running             kube-proxy                0                   590527a253a30       kube-proxy-dgh6m
	0cf5b602fc0c4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     17 minutes ago      Running             kube-vip                  0                   71c70584ee9c6       kube-vip-ha-653100
	a0697c56404b8       6052a25da3f97                                                                                         17 minutes ago      Running             kube-controller-manager   0                   5c4190df9fb18       kube-controller-manager-ha-653100
	d68da55f0f382       8c390d98f50c0                                                                                         17 minutes ago      Running             kube-scheduler            0                   92e96b6d41bb2       kube-scheduler-ha-653100
	b7958fc0d30b8       39f995c9f1996                                                                                         17 minutes ago      Running             kube-apiserver            0                   a7b3e44514ced       kube-apiserver-ha-653100
	a0fa6c17de399       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   65fe5df3a93dd       etcd-ha-653100
	
	
	==> coredns [58d38dcc399d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45845 - 8967 "HINFO IN 8354542665525626293.2689365418710486320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045113649s
	[INFO] 10.244.0.4:51221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.078581836s
	[INFO] 10.244.0.4:47875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.52769764s
	[INFO] 10.244.0.4:52717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306601s
	[INFO] 10.244.0.4:39163 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050987688s
	[INFO] 10.244.0.4:37816 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001543s
	[INFO] 10.244.0.4:60144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014447825s
	[INFO] 10.244.0.4:55552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204001s
	[INFO] 10.244.0.4:36177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153901s
	[INFO] 10.244.0.4:46410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000283001s
	[INFO] 10.244.0.4:57190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168701s
	
	
	==> coredns [79df4cc493cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57426 - 12156 "HINFO IN 2507889984284766848.6813386495577107890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.33687907s
	[INFO] 10.244.0.4:40226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000337201s
	[INFO] 10.244.0.4:56672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.049146285s
	[INFO] 10.244.0.4:54337 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001723s
	[INFO] 10.244.0.4:58976 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002015s
	[INFO] 10.244.0.4:41149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024s
	[INFO] 10.244.0.4:37438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000310601s
	
	
	==> describe nodes <==
	Name:               ha-653100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:22:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:39:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.63.147
	  Hostname:    ha-653100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7ba8367096d4bf9b0e4541361a84287
	  System UUID:                64d5f641-1f2f-ce46-8918-a08d661c1258
	  Boot ID:                    994d41df-0ae9-4f39-ad28-f5e794182c63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-5w5x4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-76f75df574-hz5n2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-sw766             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-653100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-k8jt8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-653100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-653100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dgh6m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-653100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-653100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node ha-653100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node ha-653100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node ha-653100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m   node-controller  Node ha-653100 event: Registered Node ha-653100 in Controller
	  Normal  NodeReady                16m   kubelet          Node ha-653100 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.084698] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.376265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 18:21] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.209937] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +33.615481] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.104388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615924] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.216331] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.260985] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.876807] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.212935] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.227831] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.311128] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[Apr15 18:22] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.114802] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.164512] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +7.677617] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.774902] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.244487] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[ +14.155639] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.279744] kauditd_printk_skb: 29 callbacks suppressed
	[Apr15 18:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0fa6c17de39] <==
	{"level":"info","ts":"2024-04-15T18:22:16.371947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c became leader at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.371957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 87419fc5adebc62c elected leader 87419fc5adebc62c at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.379846Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"87419fc5adebc62c","local-member-attributes":"{Name:ha-653100 ClientURLs:[https://172.19.63.147:2379]}","request-path":"/0/members/87419fc5adebc62c/attributes","cluster-id":"877b68dea54e79ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T18:22:16.380125Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.381521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:22:16.38789Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T18:22:16.387913Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T18:22:16.388265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:22:16.392534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"877b68dea54e79ed","local-member-id":"87419fc5adebc62c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.392806Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.393093Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.397755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T18:22:16.407773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.63.147:2379"}
	{"level":"warn","ts":"2024-04-15T18:22:40.147381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.732834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-15T18:22:40.147471Z","caller":"traceutil/trace.go:171","msg":"trace[1597021408] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:383; }","duration":"127.883033ms","start":"2024-04-15T18:22:40.019571Z","end":"2024-04-15T18:22:40.147455Z","steps":["trace[1597021408] 'range keys from in-memory index tree'  (duration: 127.546236ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:22:59.717501Z","caller":"traceutil/trace.go:171","msg":"trace[1969279315] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"139.509436ms","start":"2024-04-15T18:22:59.57797Z","end":"2024-04-15T18:22:59.71748Z","steps":["trace[1969279315] 'process raft request'  (duration: 139.406534ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:23:00.52288Z","caller":"traceutil/trace.go:171","msg":"trace[385091820] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"121.870177ms","start":"2024-04-15T18:23:00.400991Z","end":"2024-04-15T18:23:00.522861Z","steps":["trace[385091820] 'process raft request'  (duration: 121.627673ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:32:17.389849Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":950}
	{"level":"info","ts":"2024-04-15T18:32:17.454759Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":950,"took":"64.64366ms","hash":2582305163,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-15T18:32:17.454923Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2582305163,"revision":950,"compact-revision":-1}
	{"level":"warn","ts":"2024-04-15T18:36:03.263945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.580472ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-15T18:36:03.26468Z","caller":"traceutil/trace.go:171","msg":"trace[1323778589] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1888; }","duration":"117.331073ms","start":"2024-04-15T18:36:03.147304Z","end":"2024-04-15T18:36:03.264635Z","steps":["trace[1323778589] 'range keys from in-memory index tree'  (duration: 116.483171ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:37:17.412681Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1485}
	{"level":"info","ts":"2024-04-15T18:37:17.429126Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1485,"took":"15.768632ms","hash":3832951504,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-15T18:37:17.429231Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3832951504,"revision":1485,"compact-revision":950}
	
	
	==> kernel <==
	 18:39:29 up 19 min,  0 users,  load average: 0.25, 0.30, 0.27
	Linux ha-653100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8533539a42fc] <==
	I0415 18:37:27.206621       1 main.go:227] handling current node
	I0415 18:37:37.212887       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:37:37.212996       1 main.go:227] handling current node
	I0415 18:37:47.219034       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:37:47.219137       1 main.go:227] handling current node
	I0415 18:37:57.230347       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:37:57.230455       1 main.go:227] handling current node
	I0415 18:38:07.236540       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:07.236728       1 main.go:227] handling current node
	I0415 18:38:17.244378       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:17.244421       1 main.go:227] handling current node
	I0415 18:38:27.250358       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:27.250404       1 main.go:227] handling current node
	I0415 18:38:37.259708       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:37.259754       1 main.go:227] handling current node
	I0415 18:38:47.266662       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:47.266803       1 main.go:227] handling current node
	I0415 18:38:57.278879       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:57.278993       1 main.go:227] handling current node
	I0415 18:39:07.293138       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:07.293418       1 main.go:227] handling current node
	I0415 18:39:17.310608       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:17.310658       1 main.go:227] handling current node
	I0415 18:39:27.316910       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:27.317049       1 main.go:227] handling current node
	
	
	==> kube-apiserver [b7958fc0d30b] <==
	I0415 18:22:19.461381       1 controller.go:624] quota admission added evaluator for: namespaces
	I0415 18:22:19.468850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 18:22:19.469135       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 18:22:19.471941       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 18:22:19.472253       1 aggregator.go:165] initial CRD sync complete...
	I0415 18:22:19.472339       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:22:19.472452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:22:19.472462       1 cache.go:39] Caches are synced for autoregister controller
	I0415 18:22:19.498049       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:22:19.510348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:22:20.354035       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 18:22:20.363724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 18:22:20.363838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:22:21.763949       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:22:21.866542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:22:22.100224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 18:22:22.118571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.63.147]
	I0415 18:22:22.120605       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 18:22:22.130952       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:22:22.385516       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:22:24.016138       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:22:24.048032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 18:22:24.081226       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:22:36.868875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 18:22:36.898745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a0697c56404b] <==
	I0415 18:22:37.265766       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:22:37.284118       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:22:37.284549       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 18:22:37.317722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="72.247257ms"
	I0415 18:22:37.317794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.6µs"
	I0415 18:22:49.963920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="429.397µs"
	I0415 18:22:49.975545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="117.899µs"
	I0415 18:22:50.009242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="326.398µs"
	I0415 18:22:50.048064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="90.899µs"
	I0415 18:22:51.764868       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0415 18:22:52.188891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.302µs"
	I0415 18:22:52.287692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.246165ms"
	I0415 18:22:52.288055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="224.505µs"
	I0415 18:22:52.333123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.028652ms"
	I0415 18:22:52.333675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="426.909µs"
	I0415 18:27:05.738408       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0415 18:27:05.789870       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-5w5x4"
	I0415 18:27:05.841032       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-8pgjv"
	I0415 18:27:05.849328       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-tk6sh"
	I0415 18:27:05.899441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="160.526833ms"
	I0415 18:27:05.957239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.716604ms"
	I0415 18:27:05.998341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.497733ms"
	I0415 18:27:05.998579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.3µs"
	I0415 18:27:09.211983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.949061ms"
	I0415 18:27:09.212464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.3µs"
	
	
	==> kube-proxy [ece5eb28b20b] <==
	I0415 18:22:38.391716       1 server_others.go:72] "Using iptables proxy"
	I0415 18:22:38.407680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.63.147"]
	I0415 18:22:38.495319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:22:38.495346       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:22:38.495361       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:22:38.500785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:22:38.501443       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:22:38.501468       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:22:38.503945       1 config.go:188] "Starting service config controller"
	I0415 18:22:38.504041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:22:38.504268       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:22:38.504770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:22:38.505829       1 config.go:315] "Starting node config controller"
	I0415 18:22:38.507970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:22:38.605316       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:22:38.605583       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:22:38.608238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d68da55f0f38] <==
	W0415 18:22:20.533571       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:22:20.533671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:22:20.559089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.559148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.566941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.569271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.649432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:22:20.649545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:22:20.680518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:22:20.681133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:22:20.703015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:22:20.703474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:22:20.766338       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 18:22:20.766458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 18:22:20.789649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 18:22:20.790593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 18:22:20.803334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:22:20.804054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:22:20.808728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:22:20.809130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:22:20.838937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:22:20.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:22:20.865287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:22:20.865345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 18:22:22.187395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:35:24 ha-653100 kubelet[2226]: E0415 18:35:24.245008    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:35:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:35:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:35:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:35:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:36:24 ha-653100 kubelet[2226]: E0415 18:36:24.244147    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:36:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:36:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:36:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:36:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:37:24 ha-653100 kubelet[2226]: E0415 18:37:24.244721    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:37:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:37:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:37:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:37:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:38:24 ha-653100 kubelet[2226]: E0415 18:38:24.244191    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:38:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:38:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:38:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:38:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:39:24 ha-653100 kubelet[2226]: E0415 18:39:24.244233    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:39:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:39:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:39:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:39:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7f2e95849717] <==
	I0415 18:22:51.745766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 18:22:51.775039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 18:22:51.776486       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 18:22:51.796625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 18:22:51.797264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855!
	I0415 18:22:51.798087       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b2abca4-b232-44be-91ab-d881b60cfa0a", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855 became leader
	I0415 18:22:51.899439       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:39:20.986304    2620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100: (13.1700136s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-653100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:282: (dbg) kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-8pgjv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4hn5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-c4hn5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m21s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-tk6sh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjshx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjshx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m21s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (759.24s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (49.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-5w5x4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-5w5x4 -- sh -c "ping -c 1 172.19.48.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-5w5x4 -- sh -c "ping -c 1 172.19.48.1": exit status 1 (10.5431135s)

                                                
                                                
-- stdout --
	PING 172.19.48.1 (172.19.48.1): 56 data bytes
	
	--- 172.19.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:39:45.593206    8436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.48.1) from pod (busybox-7fdf7869d9-5w5x4): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-8pgjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (438.3826ms)

                                                
                                                
** stderr ** 
	W0415 18:39:56.143058    8604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-8pgjv does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7fdf7869d9-8pgjv could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-653100 -- exec busybox-7fdf7869d9-tk6sh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (441.0467ms)

                                                
                                                
** stderr ** 
	W0415 18:39:56.568878   10620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-tk6sh does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7fdf7869d9-tk6sh could not resolve 'host.minikube.internal': exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100: (12.9979546s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 logs -n 25
E0415 18:40:10.519007   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 logs -n 25: (9.3649116s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-5w5x4 -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.48.1             |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:19:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
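The copyHostCerts lines above follow a found → rm → cp pattern for each of ca.pem, cert.pem and key.pem: any stale copy is removed before the fresh certificate is copied in and its byte count logged. A minimal Python sketch of that idempotent replace-then-copy step (function and file names are illustrative, not minikube's actual Go implementation):

```python
import os
import shutil
import tempfile

def copy_host_cert(src: str, dst: str) -> int:
    """Replace dst with src, mirroring the found/rm/cp log lines:
    remove any stale copy first, then copy and report the byte count."""
    if os.path.exists(dst):       # "found ..., removing ..."
        os.remove(dst)            # "rm: ..."
    shutil.copyfile(src, dst)     # "cp: ... --> ... (N bytes)"
    return os.path.getsize(dst)

# usage: copy a fake ca.pem into a machine directory
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "ca.pem")
dst = os.path.join(tmp, "machine-ca.pem")
with open(src, "w") as f:
    f.write("-----BEGIN CERTIFICATE-----\n")
print(copy_host_cert(src, dst))  # byte count of the copied cert
```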
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
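buildroot.go detects the root filesystem type by running `df --output=fstype / | tail -n 1` over SSH and reading the single remaining line, `tmpfs` here. A small Python sketch of parsing that df output (the sample output string is taken from this log; the helper itself is illustrative):

```python
def root_fstype(df_output: str) -> str:
    """Last non-empty line of `df --output=fstype /` is the fs type,
    the same line `tail -n 1` keeps."""
    lines = [l.strip() for l in df_output.strip().splitlines()]
    return lines[-1]

# df prints a "Type" header row followed by the value row
print(root_fstype("Type\ntmpfs\n"))  # prints tmpfs
```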
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
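The SSH command above (`diff -u current new || { mv new current; systemctl daemon-reload && enable && restart; }`) only installs docker.service.new and restarts Docker when the unit actually changed; a missing current file (the `can't stat` case in this log) counts as changed. A rough Python rendering of that write-if-changed guard (paths and the helper name are placeholders, not minikube code):

```python
import os
import tempfile

def install_if_changed(current_path: str, new_path: str) -> bool:
    """Mimic `diff || { mv; restart }`: replace current with new only
    when contents differ (or current is missing). Returns True when a
    daemon-reload + restart would be required."""
    try:
        with open(current_path) as f:
            current = f.read()
    except FileNotFoundError:
        current = None  # diff: can't stat ... -> treat as changed
    with open(new_path) as f:
        new = f.read()
    if current == new:
        os.remove(new_path)  # nothing to do, discard the .new file
        return False
    os.replace(new_path, current_path)  # mv docker.service.new docker.service
    return True                         # caller would reload + restart docker

# demo: first install (no existing unit) requires a restart
d = tempfile.mkdtemp()
new = os.path.join(d, "docker.service.new")
with open(new, "w") as f:
    f.write("[Unit]\nDescription=Docker Application Container Engine\n")
print(install_if_changed(os.path.join(d, "docker.service"), new))  # True
```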
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
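The fix.go lines above compare the guest clock (epoch seconds from `date +%s.%N`) against the host's view of the remote time, log the delta (5.185908919s here), then reset the guest with `sudo date -s @<epoch>`. The delta computation can be sketched in Python using the exact values from this log (the helper itself is illustrative):

```python
from datetime import datetime, timezone

def clock_delta(guest_epoch: float, remote: datetime) -> float:
    """Seconds the guest clock is ahead of the host's remote timestamp
    (positive means the guest runs ahead), as in the fix.go delta line."""
    guest = datetime.fromtimestamp(guest_epoch, tz=timezone.utc)
    return (guest - remote).total_seconds()

# values from this log: guest 1713205288.448859419 (18:21:28.44 UTC)
# vs the host-side Remote timestamp 18:21:23.2629505 UTC
remote = datetime(2024, 4, 15, 18, 21, 23, 262950, tzinfo=timezone.utc)
print(round(clock_delta(1713205288.448859419, remote), 3))  # prints 5.186
```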
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
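The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver. The SystemdCgroup edit, for example, is an indentation-preserving keyed line replacement; the same substitution in Python (the sample TOML fragment is illustrative):

```python
import re

def set_cgroupfs(toml: str) -> str:
    """Equivalent of:
    sed -r 's|^( *)SystemdCgroup = .*$|\\1SystemdCgroup = false|g'
    -- keep the leading indentation, force the value to false."""
    return re.sub(r"(?m)^( *)SystemdCgroup = .*$",
                  r"\1SystemdCgroup = false", toml)

sample = ('[plugins."io.containerd.grpc.v1.cri".containerd'
          '.runtimes.runc.options]\n'
          '  SystemdCgroup = true\n')
print(set_cgroupfs(sample))
```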
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
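The bash one-liner above upserts the `host.minikube.internal` entry: it filters any existing line for that hostname out of /etc/hosts (`grep -v $'\thost.minikube.internal$'`), appends the fresh `<ip>\t<name>` mapping, and copies the result back over /etc/hosts. A Python rendering of that filter-and-append (IP taken from this log; the helper is illustrative):

```python
def upsert_host(hosts: str, ip: str, name: str) -> str:
    """Mirror: { grep -v $'\\t<name>$' /etc/hosts; echo '<ip>\\t<name>'; }"""
    kept = [l for l in hosts.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

print(upsert_host("127.0.0.1\tlocalhost\n"
                  "172.19.48.2\thost.minikube.internal",
                  "172.19.48.1", "host.minikube.internal"))
```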
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
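The existence check above branches on `stat`'s exit status rather than parsing its output: status 0 means the preload tarball is already on the VM, status 1 triggers the scp that follows. A minimal stand-alone sketch of the same probe (GNU `stat`; paths are illustrative, not the ones minikube uses):

```shell
#!/bin/sh
# Probe for a file the way ssh_runner does: run `stat` with a format
# string (%s = size in bytes, %y = mtime) and branch on the exit status.
probe() {
  if stat -c "%s %y" "$1" >/dev/null 2>&1; then
    echo "exists: $1"
  else
    echo "missing: $1"   # the caller would scp the tarball here
  fi
}

tmp=$(mktemp)        # a file that exists
probe "$tmp"
probe "$tmp.absent"  # a file that does not
rm -f "$tmp"
```

Discarding stdout/stderr and keeping only the status makes the probe robust against the `cannot statx` wording differing between coreutils versions.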
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
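The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale `control-plane.minikube.internal` line, append the fresh IP mapping, and copy the rewritten file back over the original via a temp file. The same pattern against a scratch file (no sudo; `set_host` is an illustrative helper, not a minikube function):

```shell
#!/bin/sh
# Idempotently pin HOST to IP in a hosts-format file: drop any existing
# line ending in "<tab>HOST", append the new mapping, then replace the
# file with a single cp so readers never see a half-written file.
TAB=$(printf '\t')

set_host() {  # set_host <ip> <host> <file>
  { grep -v "${TAB}$2\$" "$3"; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
  cp "/tmp/h.$$" "$3" && rm -f "/tmp/h.$$"
}

HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n1.2.3.4\tcontrol-plane.minikube.internal\n' > "$HOSTS"
set_host 172.19.63.254 control-plane.minikube.internal "$HOSTS"
grep control-plane "$HOSTS"
rm -f "$HOSTS"
```

Running it repeatedly leaves exactly one line for the host, which is why minikube can re-run the step on every start without accumulating duplicates.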
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
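The cert-installation loop above follows OpenSSL's hashed-symlink trust-directory convention: the PEM lands under /usr/share/ca-certificates, `openssl x509 -hash` computes its subject hash (e.g. `b5213941` for minikubeCA), and /etc/ssl/certs gets a `<hash>.0` symlink so OpenSSL can locate the CA by hash lookup. A self-contained sketch with a throwaway cert and temp dirs standing in for the system paths:

```shell
#!/bin/sh
# Install a CA into an OpenSSL-style trust dir: store the PEM, compute
# its subject hash, and create the <hash>.0 symlink OpenSSL resolves.
WORK=$(mktemp -d)
mkdir -p "$WORK/share" "$WORK/ssl"

# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$WORK/ca.key" -out "$WORK/share/demoCA.pem" 2>/dev/null

HASH=$(openssl x509 -hash -noout -in "$WORK/share/demoCA.pem")
# Idempotent, same as the log: only link if the hash slot is free.
test -L "$WORK/ssl/$HASH.0" || ln -fs "$WORK/share/demoCA.pem" "$WORK/ssl/$HASH.0"

openssl x509 -subject -noout -in "$WORK/ssl/$HASH.0"
```

The `test -L || ln -fs` guard mirrors the logged commands and is what lets minikube re-run the step safely when the symlink already exists.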
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
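The clock fix above computes a guest/host delta (5.3s here) and then resets the guest clock with `sudo date -s @<epoch>` over SSH. A minimal standalone sketch of that delta arithmetic, using the epoch values from the log (the SSH transport is omitted; the real code reads the guest time with `date +%s.%N`):

```shell
# Hypothetical standalone version of the clock-sync step above.
guest_epoch=1713205508   # guest clock reading from the log
host_epoch=1713205503    # host-side "Remote" clock from the log, truncated to seconds
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"
# With a nonzero delta, minikube then pins the guest clock over SSH:
#   sudo date -s @${guest_epoch}
```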
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
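The run of sed commands above rewrites `/etc/containerd/config.toml` in place to force the "cgroupfs" cgroup driver before restarting containerd. A minimal reproduction of the `SystemdCgroup` edit against a throwaway copy of the file (same sed expression as the log; the file path and contents here are illustrative):

```shell
# Reproduce the SystemdCgroup edit from the log against a scratch file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same expression minikube runs against /etc/containerd/config.toml:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -q 'SystemdCgroup = false' "$cfg" && echo "cgroupfs driver configured"
rm -f "$cfg"
```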
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 
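Per the journal above, the `systemctl restart docker` failure is dockerd[1016] timing out while dialing `/run/containerd/containerd.sock`. A first check on the guest would be whether that socket exists at all (hypothetical diagnostic sketch; the path is taken from the error message):

```shell
# Check for the containerd socket that dockerd failed to dial.
sock=/run/containerd/containerd.sock
if [ -S "$sock" ]; then
  echo "containerd socket present"
else
  echo "containerd socket missing"
fi
```

If the socket is missing, `journalctl -u containerd` on the guest would be the next place to look.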
	
	
	==> Docker <==
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:26:50 ha-653100 dockerd[1321]: 2024/04/15 18:26:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.406838301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.407303300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.407328800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.407457500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:06 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:27:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ba88ccaba1a512a72acfefb5864241c5bdcf769724a94eb9e19d7eb09298ffa/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 15 18:27:07 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:27:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.008356931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012080171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012126471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012457375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:30 ha-653100 dockerd[1321]: 2024/04/15 18:39:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3810def19c30b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   4ba88ccaba1a5       busybox-7fdf7869d9-5w5x4
	58d38dcc399d7       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   0                   66b040582e9fe       coredns-76f75df574-hz5n2
	7f2e95849717e       6e38f40d628db                                                                                         17 minutes ago      Running             storage-provisioner       0                   41946a72e3913       storage-provisioner
	79df4cc493ccd       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   0                   c2bc3be2dada4       coredns-76f75df574-sw766
	8533539a42fc8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              17 minutes ago      Running             kindnet-cni               0                   840d4c720c681       kindnet-k8jt8
	ece5eb28b20be       a1d263b5dc5b0                                                                                         17 minutes ago      Running             kube-proxy                0                   590527a253a30       kube-proxy-dgh6m
	0cf5b602fc0c4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     18 minutes ago      Running             kube-vip                  0                   71c70584ee9c6       kube-vip-ha-653100
	a0697c56404b8       6052a25da3f97                                                                                         18 minutes ago      Running             kube-controller-manager   0                   5c4190df9fb18       kube-controller-manager-ha-653100
	d68da55f0f382       8c390d98f50c0                                                                                         18 minutes ago      Running             kube-scheduler            0                   92e96b6d41bb2       kube-scheduler-ha-653100
	b7958fc0d30b8       39f995c9f1996                                                                                         18 minutes ago      Running             kube-apiserver            0                   a7b3e44514ced       kube-apiserver-ha-653100
	a0fa6c17de399       3861cfcd7c04c                                                                                         18 minutes ago      Running             etcd                      0                   65fe5df3a93dd       etcd-ha-653100
	
	
	==> coredns [58d38dcc399d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45845 - 8967 "HINFO IN 8354542665525626293.2689365418710486320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045113649s
	[INFO] 10.244.0.4:51221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.078581836s
	[INFO] 10.244.0.4:47875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.52769764s
	[INFO] 10.244.0.4:52717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306601s
	[INFO] 10.244.0.4:39163 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050987688s
	[INFO] 10.244.0.4:37816 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001543s
	[INFO] 10.244.0.4:60144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014447825s
	[INFO] 10.244.0.4:55552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204001s
	[INFO] 10.244.0.4:36177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153901s
	[INFO] 10.244.0.4:46410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000283001s
	[INFO] 10.244.0.4:57190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168701s
	[INFO] 10.244.0.4:47185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002385s
	[INFO] 10.244.0.4:34139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001337s
	[INFO] 10.244.0.4:51029 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000098701s
	
	
	==> coredns [79df4cc493cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57426 - 12156 "HINFO IN 2507889984284766848.6813386495577107890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.33687907s
	[INFO] 10.244.0.4:40226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000337201s
	[INFO] 10.244.0.4:56672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.049146285s
	[INFO] 10.244.0.4:54337 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001723s
	[INFO] 10.244.0.4:58976 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002015s
	[INFO] 10.244.0.4:41149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024s
	[INFO] 10.244.0.4:37438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000310601s
	[INFO] 10.244.0.4:54099 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002466s
	
	
	==> describe nodes <==
	Name:               ha-653100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:22:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:40:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:37:44 +0000   Mon, 15 Apr 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.63.147
	  Hostname:    ha-653100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7ba8367096d4bf9b0e4541361a84287
	  System UUID:                64d5f641-1f2f-ce46-8918-a08d661c1258
	  Boot ID:                    994d41df-0ae9-4f39-ad28-f5e794182c63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-5w5x4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-hz5n2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-76f75df574-sw766             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-653100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-k8jt8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-653100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-653100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dgh6m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-653100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-653100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node ha-653100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node ha-653100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node ha-653100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m   node-controller  Node ha-653100 event: Registered Node ha-653100 in Controller
	  Normal  NodeReady                17m   kubelet          Node ha-653100 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.084698] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.376265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 18:21] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.209937] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +33.615481] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.104388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615924] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.216331] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.260985] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.876807] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.212935] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.227831] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.311128] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[Apr15 18:22] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.114802] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.164512] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +7.677617] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.774902] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.244487] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[ +14.155639] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.279744] kauditd_printk_skb: 29 callbacks suppressed
	[Apr15 18:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0fa6c17de39] <==
	{"level":"info","ts":"2024-04-15T18:22:16.371947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87419fc5adebc62c became leader at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.371957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 87419fc5adebc62c elected leader 87419fc5adebc62c at term 2"}
	{"level":"info","ts":"2024-04-15T18:22:16.379846Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"87419fc5adebc62c","local-member-attributes":"{Name:ha-653100 ClientURLs:[https://172.19.63.147:2379]}","request-path":"/0/members/87419fc5adebc62c/attributes","cluster-id":"877b68dea54e79ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T18:22:16.380125Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.381521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:22:16.38789Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T18:22:16.387913Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T18:22:16.388265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T18:22:16.392534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"877b68dea54e79ed","local-member-id":"87419fc5adebc62c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.392806Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.393093Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T18:22:16.397755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T18:22:16.407773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.63.147:2379"}
	{"level":"warn","ts":"2024-04-15T18:22:40.147381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.732834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-15T18:22:40.147471Z","caller":"traceutil/trace.go:171","msg":"trace[1597021408] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:383; }","duration":"127.883033ms","start":"2024-04-15T18:22:40.019571Z","end":"2024-04-15T18:22:40.147455Z","steps":["trace[1597021408] 'range keys from in-memory index tree'  (duration: 127.546236ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:22:59.717501Z","caller":"traceutil/trace.go:171","msg":"trace[1969279315] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"139.509436ms","start":"2024-04-15T18:22:59.57797Z","end":"2024-04-15T18:22:59.71748Z","steps":["trace[1969279315] 'process raft request'  (duration: 139.406534ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:23:00.52288Z","caller":"traceutil/trace.go:171","msg":"trace[385091820] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"121.870177ms","start":"2024-04-15T18:23:00.400991Z","end":"2024-04-15T18:23:00.522861Z","steps":["trace[385091820] 'process raft request'  (duration: 121.627673ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:32:17.389849Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":950}
	{"level":"info","ts":"2024-04-15T18:32:17.454759Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":950,"took":"64.64366ms","hash":2582305163,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-15T18:32:17.454923Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2582305163,"revision":950,"compact-revision":-1}
	{"level":"warn","ts":"2024-04-15T18:36:03.263945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.580472ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-15T18:36:03.26468Z","caller":"traceutil/trace.go:171","msg":"trace[1323778589] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1888; }","duration":"117.331073ms","start":"2024-04-15T18:36:03.147304Z","end":"2024-04-15T18:36:03.264635Z","steps":["trace[1323778589] 'range keys from in-memory index tree'  (duration: 116.483171ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:37:17.412681Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1485}
	{"level":"info","ts":"2024-04-15T18:37:17.429126Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1485,"took":"15.768632ms","hash":3832951504,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-15T18:37:17.429231Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3832951504,"revision":1485,"compact-revision":950}
	
	
	==> kernel <==
	 18:40:18 up 20 min,  0 users,  load average: 0.25, 0.30, 0.27
	Linux ha-653100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8533539a42fc] <==
	I0415 18:38:17.244421       1 main.go:227] handling current node
	I0415 18:38:27.250358       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:27.250404       1 main.go:227] handling current node
	I0415 18:38:37.259708       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:37.259754       1 main.go:227] handling current node
	I0415 18:38:47.266662       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:47.266803       1 main.go:227] handling current node
	I0415 18:38:57.278879       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:38:57.278993       1 main.go:227] handling current node
	I0415 18:39:07.293138       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:07.293418       1 main.go:227] handling current node
	I0415 18:39:17.310608       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:17.310658       1 main.go:227] handling current node
	I0415 18:39:27.316910       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:27.317049       1 main.go:227] handling current node
	I0415 18:39:37.323612       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:37.323747       1 main.go:227] handling current node
	I0415 18:39:47.336086       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:47.336401       1 main.go:227] handling current node
	I0415 18:39:57.348029       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:39:57.348220       1 main.go:227] handling current node
	I0415 18:40:07.356951       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:40:07.357083       1 main.go:227] handling current node
	I0415 18:40:17.364512       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:40:17.364614       1 main.go:227] handling current node
	
	
	==> kube-apiserver [b7958fc0d30b] <==
	I0415 18:22:19.461381       1 controller.go:624] quota admission added evaluator for: namespaces
	I0415 18:22:19.468850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 18:22:19.469135       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 18:22:19.471941       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 18:22:19.472253       1 aggregator.go:165] initial CRD sync complete...
	I0415 18:22:19.472339       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:22:19.472452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:22:19.472462       1 cache.go:39] Caches are synced for autoregister controller
	I0415 18:22:19.498049       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:22:19.510348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:22:20.354035       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 18:22:20.363724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 18:22:20.363838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:22:21.763949       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:22:21.866542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:22:22.100224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 18:22:22.118571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.63.147]
	I0415 18:22:22.120605       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 18:22:22.130952       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:22:22.385516       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:22:24.016138       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:22:24.048032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 18:22:24.081226       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:22:36.868875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 18:22:36.898745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a0697c56404b] <==
	I0415 18:22:37.265766       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:22:37.284118       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:22:37.284549       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 18:22:37.317722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="72.247257ms"
	I0415 18:22:37.317794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.6µs"
	I0415 18:22:49.963920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="429.397µs"
	I0415 18:22:49.975545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="117.899µs"
	I0415 18:22:50.009242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="326.398µs"
	I0415 18:22:50.048064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="90.899µs"
	I0415 18:22:51.764868       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0415 18:22:52.188891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.302µs"
	I0415 18:22:52.287692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.246165ms"
	I0415 18:22:52.288055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="224.505µs"
	I0415 18:22:52.333123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.028652ms"
	I0415 18:22:52.333675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="426.909µs"
	I0415 18:27:05.738408       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0415 18:27:05.789870       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-5w5x4"
	I0415 18:27:05.841032       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-8pgjv"
	I0415 18:27:05.849328       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-tk6sh"
	I0415 18:27:05.899441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="160.526833ms"
	I0415 18:27:05.957239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.716604ms"
	I0415 18:27:05.998341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.497733ms"
	I0415 18:27:05.998579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.3µs"
	I0415 18:27:09.211983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.949061ms"
	I0415 18:27:09.212464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.3µs"
	
	
	==> kube-proxy [ece5eb28b20b] <==
	I0415 18:22:38.391716       1 server_others.go:72] "Using iptables proxy"
	I0415 18:22:38.407680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.63.147"]
	I0415 18:22:38.495319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:22:38.495346       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:22:38.495361       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:22:38.500785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:22:38.501443       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:22:38.501468       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:22:38.503945       1 config.go:188] "Starting service config controller"
	I0415 18:22:38.504041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:22:38.504268       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:22:38.504770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:22:38.505829       1 config.go:315] "Starting node config controller"
	I0415 18:22:38.507970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:22:38.605316       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:22:38.605583       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:22:38.608238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d68da55f0f38] <==
	W0415 18:22:20.533571       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:22:20.533671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:22:20.559089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.559148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.566941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.569271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.649432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:22:20.649545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:22:20.680518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:22:20.681133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:22:20.703015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:22:20.703474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:22:20.766338       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 18:22:20.766458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 18:22:20.789649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 18:22:20.790593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 18:22:20.803334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:22:20.804054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:22:20.808728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:22:20.809130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:22:20.838937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:22:20.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:22:20.865287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:22:20.865345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 18:22:22.187395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:35:24 ha-653100 kubelet[2226]: E0415 18:35:24.245008    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:35:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:35:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:35:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:35:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:36:24 ha-653100 kubelet[2226]: E0415 18:36:24.244147    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:36:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:36:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:36:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:36:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:37:24 ha-653100 kubelet[2226]: E0415 18:37:24.244721    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:37:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:37:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:37:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:37:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:38:24 ha-653100 kubelet[2226]: E0415 18:38:24.244191    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:38:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:38:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:38:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:38:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:39:24 ha-653100 kubelet[2226]: E0415 18:39:24.244233    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:39:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:39:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:39:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:39:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7f2e95849717] <==
	I0415 18:22:51.745766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 18:22:51.775039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 18:22:51.776486       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 18:22:51.796625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 18:22:51.797264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855!
	I0415 18:22:51.798087       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b2abca4-b232-44be-91ab-d881b60cfa0a", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855 became leader
	I0415 18:22:51.899439       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-653100_25d3e2ad-9ea0-4e78-8d19-2cecacd07855!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:40:10.019346    9348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100: (13.5516998s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-653100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:282: (dbg) kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-8pgjv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4hn5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-c4hn5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m11s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-tk6sh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjshx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjshx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m11s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (49.76s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (285.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-653100 -v=7 --alsologtostderr
E0415 18:41:36.767476   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 18:41:53.553609   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-653100 -v=7 --alsologtostderr: (3m30.5954121s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-653100 status -v=7 --alsologtostderr: exit status 2 (38.4375434s)

                                                
                                                
-- stdout --
	ha-653100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-653100-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-653100-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:44:04.937671    5028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:44:05.031071    5028 out.go:291] Setting OutFile to fd 864 ...
	I0415 18:44:05.031712    5028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:44:05.031712    5028 out.go:304] Setting ErrFile to fd 672...
	I0415 18:44:05.031712    5028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:44:05.049682    5028 out.go:298] Setting JSON to false
	I0415 18:44:05.049775    5028 mustload.go:65] Loading cluster: ha-653100
	I0415 18:44:05.049775    5028 notify.go:220] Checking for updates...
	I0415 18:44:05.050723    5028 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:44:05.050723    5028 status.go:255] checking status of ha-653100 ...
	I0415 18:44:05.051650    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:44:07.387715    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:07.388240    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:07.388464    5028 status.go:330] ha-653100 host status = "Running" (err=<nil>)
	I0415 18:44:07.388464    5028 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:44:07.388663    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:44:09.692916    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:09.692916    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:09.692916    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:44:12.454502    5028 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:44:12.454502    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:12.455365    5028 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:44:12.469830    5028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:44:12.469830    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:44:14.746676    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:14.747076    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:14.747076    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:44:17.504864    5028 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:44:17.504864    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:17.505774    5028 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:44:17.615684    5028 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1458118s)
	I0415 18:44:17.630271    5028 ssh_runner.go:195] Run: systemctl --version
	I0415 18:44:17.657724    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:44:17.685892    5028 kubeconfig.go:125] found "ha-653100" server: "https://172.19.63.254:8443"
	I0415 18:44:17.686081    5028 api_server.go:166] Checking apiserver status ...
	I0415 18:44:17.701162    5028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:44:17.743830    5028 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup
	W0415 18:44:17.767360    5028 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:44:17.783520    5028 ssh_runner.go:195] Run: ls
	I0415 18:44:17.791400    5028 api_server.go:253] Checking apiserver healthz at https://172.19.63.254:8443/healthz ...
	I0415 18:44:17.800201    5028 api_server.go:279] https://172.19.63.254:8443/healthz returned 200:
	ok
	I0415 18:44:17.800319    5028 status.go:422] ha-653100 apiserver status = Running (err=<nil>)
	I0415 18:44:17.800319    5028 status.go:257] ha-653100 status: &{Name:ha-653100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:44:17.800359    5028 status.go:255] checking status of ha-653100-m02 ...
	I0415 18:44:17.801201    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:44:20.096972    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:20.096972    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:20.097077    5028 status.go:330] ha-653100-m02 host status = "Running" (err=<nil>)
	I0415 18:44:20.097077    5028 host.go:66] Checking if "ha-653100-m02" exists ...
	I0415 18:44:20.098614    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:44:22.441803    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:22.441803    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:22.442814    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:44:25.228494    5028 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:44:25.228494    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:25.228494    5028 host.go:66] Checking if "ha-653100-m02" exists ...
	I0415 18:44:25.246220    5028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:44:25.246220    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:44:27.546498    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:27.546734    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:27.546821    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:44:30.292901    5028 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:44:30.292901    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:30.293255    5028 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:44:30.403592    5028 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1572008s)
	I0415 18:44:30.416556    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:44:30.449507    5028 kubeconfig.go:125] found "ha-653100" server: "https://172.19.63.254:8443"
	I0415 18:44:30.449653    5028 api_server.go:166] Checking apiserver status ...
	I0415 18:44:30.463415    5028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0415 18:44:30.492833    5028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:44:30.492833    5028 status.go:422] ha-653100-m02 apiserver status = Stopped (err=<nil>)
	I0415 18:44:30.492833    5028 status.go:257] ha-653100-m02 status: &{Name:ha-653100-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:44:30.492932    5028 status.go:255] checking status of ha-653100-m03 ...
	I0415 18:44:30.493230    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:44:32.818381    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:32.818381    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:32.818977    5028 status.go:330] ha-653100-m03 host status = "Running" (err=<nil>)
	I0415 18:44:32.818977    5028 host.go:66] Checking if "ha-653100-m03" exists ...
	I0415 18:44:32.819284    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:44:35.150539    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:35.150887    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:35.151111    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 18:44:37.920289    5028 main.go:141] libmachine: [stdout =====>] : 172.19.51.108
	
	I0415 18:44:37.920289    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:37.920676    5028 host.go:66] Checking if "ha-653100-m03" exists ...
	I0415 18:44:37.935735    5028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:44:37.935735    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:44:40.246770    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:44:40.246770    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:40.246899    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 18:44:43.058052    5028 main.go:141] libmachine: [stdout =====>] : 172.19.51.108
	
	I0415 18:44:43.058052    5028 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:44:43.059237    5028 sshutil.go:53] new ssh client: &{IP:172.19.51.108 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m03\id_rsa Username:docker}
	I0415 18:44:43.163782    5028 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2280037s)
	I0415 18:44:43.177788    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:44:43.204136    5028 status.go:257] ha-653100-m03 status: &{Name:ha-653100-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-653100 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100: (13.2612708s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 logs -n 25: (9.0067582s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-5w5x4 -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.48.1             |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| node    | add -p ha-653100 -v=7                | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:40 UTC | 15 Apr 24 18:44 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:19:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
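The configureAuth step above generated a server certificate and copied ca.pem, server.pem, and server-key.pem into /etc/docker on the guest. A quick way to sanity-check that such a cert/key pair actually belongs together is to compare public keys; this is a standalone sketch using a throwaway self-signed pair (the filenames are illustrative stand-ins, not minikube's real certs):

```shell
# Generate a throwaway self-signed cert/key pair (illustrative stand-ins
# for server.pem / server-key.pem from the log above).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example" \
  -keyout "$tmp/server-key.pem" -out "$tmp/server.pem" 2>/dev/null

# The cert and key match iff they carry the identical public key.
cert_pub=$(openssl x509 -in "$tmp/server.pem" -noout -pubkey)
key_pub=$(openssl pkey -in "$tmp/server-key.pem" -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "cert/key match"
else
  echo "cert/key MISMATCH"
fi
```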
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
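The probe minikube ran over SSH here is `df --output=fstype / | tail -n 1`; a Buildroot live image keeps its root filesystem on tmpfs, which is why buildroot.go records that value. The same probe run locally (note `--output` is a GNU coreutils `df` option):

```shell
# Report the filesystem type of / -- the exact probe from the log,
# minus the SSH transport.
fstype=$(df --output=fstype / | tail -n 1)
echo "root filesystem type: $fstype"
```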
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
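The write-then-swap just executed (`tee docker.service.new`, then `diff || { mv && systemctl daemon-reload/enable/restart; }`) only disturbs the live unit when its content actually changed; here `diff` failed because no unit existed yet, so the new file was moved into place and docker enabled. The pattern in isolation, on scratch files (the `systemctl` calls from the log are omitted so the sketch runs anywhere):

```shell
# Write the candidate config, then swap it in only if it differs from
# the current one -- the idempotent update pattern from the log above.
tmp=$(mktemp -d)
unit="$tmp/docker.service"
echo "old content" > "$unit"
echo "new content" > "$unit.new"

if diff -u "$unit" "$unit.new" >/dev/null; then
  echo "unchanged: nothing to do"
else
  mv "$unit.new" "$unit"
  echo "updated"    # in the real flow: systemctl daemon-reload && restart docker
fi
```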
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
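fix.go compared the guest clock (read via `date +%s.%N`) against the host, saw a delta of about 5.3s, and reset the guest with `sudo date -s @1713205508`. Decoding that epoch locally reproduces the timestamp the guest echoed back (GNU `date` assumed):

```shell
# Decode the epoch seconds that were pushed to the guest in this run.
epoch=1713205508
decoded=$(date -u -d "@$epoch" '+%a %b %e %H:%M:%S UTC %Y')
echo "$decoded"   # the guest echoed: Mon Apr 15 18:25:08 UTC 2024
```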
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
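The sed series above rewrites /etc/containerd/config.toml in place: it forces `SystemdCgroup = false` (the cgroupfs driver), migrates v1 runtime shims to `io.containerd.runc.v2`, pins `conf_dir` to /etc/cni/net.d, and re-enables unprivileged ports. The two core substitutions, demonstrated on a minimal scratch config (not the VM's real file):

```shell
# Minimal stand-in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
runtime_type = "io.containerd.runtime.v1.linux"
EOF

# Same substitutions the log runs over SSH (GNU sed -i).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
grep SystemdCgroup "$cfg"
```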
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
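The "scp memory" line writes a generated 130-byte /etc/docker/daemon.json that pins docker's cgroup driver to cgroupfs. The log does not print the payload, so the file below is an assumed shape modeled on documented dockerd daemon.json keys, shown only to make the "configuring docker to use cgroupfs" step concrete:

```shell
# Hypothetical daemon.json for the "cgroupfs as cgroup driver" step;
# the real 130-byte payload is not shown in the log.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF

# Validate the file parses as JSON before it would reach dockerd --
# a malformed daemon.json is one way to land in the restart failure below.
python3 -m json.tool "$tmp/daemon.json" >/dev/null && echo "daemon.json: valid JSON"
```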
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 
	
	
	==> Docker <==
	Apr 15 18:27:06 ha-653100 dockerd[1327]: time="2024-04-15T18:27:06.407457500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:06 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:27:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ba88ccaba1a512a72acfefb5864241c5bdcf769724a94eb9e19d7eb09298ffa/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 15 18:27:07 ha-653100 cri-dockerd[1226]: time="2024-04-15T18:27:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.008356931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012080171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012126471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:27:08 ha-653100 dockerd[1327]: time="2024-04-15T18:27:08.012457375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:30 ha-653100 dockerd[1321]: 2024/04/15 18:39:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3810def19c30b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   4ba88ccaba1a5       busybox-7fdf7869d9-5w5x4
	58d38dcc399d7       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   66b040582e9fe       coredns-76f75df574-hz5n2
	7f2e95849717e       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   41946a72e3913       storage-provisioner
	79df4cc493ccd       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   c2bc3be2dada4       coredns-76f75df574-sw766
	8533539a42fc8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago      Running             kindnet-cni               0                   840d4c720c681       kindnet-k8jt8
	ece5eb28b20be       a1d263b5dc5b0                                                                                         22 minutes ago      Running             kube-proxy                0                   590527a253a30       kube-proxy-dgh6m
	0cf5b602fc0c4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     22 minutes ago      Running             kube-vip                  0                   71c70584ee9c6       kube-vip-ha-653100
	a0697c56404b8       6052a25da3f97                                                                                         22 minutes ago      Running             kube-controller-manager   0                   5c4190df9fb18       kube-controller-manager-ha-653100
	d68da55f0f382       8c390d98f50c0                                                                                         22 minutes ago      Running             kube-scheduler            0                   92e96b6d41bb2       kube-scheduler-ha-653100
	b7958fc0d30b8       39f995c9f1996                                                                                         22 minutes ago      Running             kube-apiserver            0                   a7b3e44514ced       kube-apiserver-ha-653100
	a0fa6c17de399       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   65fe5df3a93dd       etcd-ha-653100
	
	
	==> coredns [58d38dcc399d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45845 - 8967 "HINFO IN 8354542665525626293.2689365418710486320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045113649s
	[INFO] 10.244.0.4:51221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.078581836s
	[INFO] 10.244.0.4:47875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.52769764s
	[INFO] 10.244.0.4:52717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306601s
	[INFO] 10.244.0.4:39163 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050987688s
	[INFO] 10.244.0.4:37816 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001543s
	[INFO] 10.244.0.4:60144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014447825s
	[INFO] 10.244.0.4:55552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204001s
	[INFO] 10.244.0.4:36177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153901s
	[INFO] 10.244.0.4:46410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000283001s
	[INFO] 10.244.0.4:57190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168701s
	[INFO] 10.244.0.4:47185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002385s
	[INFO] 10.244.0.4:34139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001337s
	[INFO] 10.244.0.4:51029 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000098701s
	
	
	==> coredns [79df4cc493cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57426 - 12156 "HINFO IN 2507889984284766848.6813386495577107890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.33687907s
	[INFO] 10.244.0.4:40226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000337201s
	[INFO] 10.244.0.4:56672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.049146285s
	[INFO] 10.244.0.4:54337 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001723s
	[INFO] 10.244.0.4:58976 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002015s
	[INFO] 10.244.0.4:41149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024s
	[INFO] 10.244.0.4:37438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000310601s
	[INFO] 10.244.0.4:54099 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002466s
	
	
	==> describe nodes <==
	Name:               ha-653100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:22:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:45:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.63.147
	  Hostname:    ha-653100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7ba8367096d4bf9b0e4541361a84287
	  System UUID:                64d5f641-1f2f-ce46-8918-a08d661c1258
	  Boot ID:                    994d41df-0ae9-4f39-ad28-f5e794182c63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-5w5x4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-76f75df574-hz5n2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-76f75df574-sw766             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-ha-653100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-k8jt8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-653100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-653100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-dgh6m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-653100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-653100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node ha-653100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node ha-653100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node ha-653100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m   node-controller  Node ha-653100 event: Registered Node ha-653100 in Controller
	  Normal  NodeReady                22m   kubelet          Node ha-653100 status is now: NodeReady
	
	
	Name:               ha-653100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T18_43_42_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:43:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:45:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.51.108
	  Hostname:    ha-653100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ceb376f540fe4419a1393b81dd4c70ec
	  System UUID:                316f69f2-57b1-1a4d-9808-3339f6c9e586
	  Boot ID:                    231d6308-8f63-4640-95e7-8ba95af6dfa1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rtbf9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      84s
	  kube-system                 kube-proxy-kvnct    0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientMemory  84s (x2 over 84s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x2 over 84s)  kubelet          Node ha-653100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x2 over 84s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           83s                node-controller  Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller
	  Normal  NodeReady                63s                kubelet          Node ha-653100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.084698] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.376265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 18:21] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.209937] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +33.615481] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.104388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615924] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.216331] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.260985] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.876807] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.212935] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.227831] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.311128] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[Apr15 18:22] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.114802] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.164512] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +7.677617] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.774902] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.244487] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[ +14.155639] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.279744] kauditd_printk_skb: 29 callbacks suppressed
	[Apr15 18:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0fa6c17de39] <==
	{"level":"info","ts":"2024-04-15T18:41:00.22731Z","caller":"traceutil/trace.go:171","msg":"trace[76664393] transaction","detail":"{read_only:false; response_revision:2419; number_of_response:1; }","duration":"105.483959ms","start":"2024-04-15T18:41:00.121805Z","end":"2024-04-15T18:41:00.227289Z","steps":["trace[76664393] 'process raft request'  (duration: 105.285059ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:41:01.418769Z","caller":"traceutil/trace.go:171","msg":"trace[8276609] transaction","detail":"{read_only:false; response_revision:2421; number_of_response:1; }","duration":"106.31566ms","start":"2024-04-15T18:41:01.312434Z","end":"2024-04-15T18:41:01.41875Z","steps":["trace[8276609] 'process raft request'  (duration: 106.038259ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:42:17.431757Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2021}
	{"level":"info","ts":"2024-04-15T18:42:17.443797Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2021,"took":"11.390416ms","hash":1421491769,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-15T18:42:17.443837Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1421491769,"revision":2021,"compact-revision":1485}
	{"level":"info","ts":"2024-04-15T18:43:33.518295Z","caller":"traceutil/trace.go:171","msg":"trace[443010453] transaction","detail":"{read_only:false; response_revision:2695; number_of_response:1; }","duration":"266.440557ms","start":"2024-04-15T18:43:33.251827Z","end":"2024-04-15T18:43:33.518267Z","steps":["trace[443010453] 'process raft request'  (duration: 266.058657ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010261Z","caller":"traceutil/trace.go:171","msg":"trace[907999705] linearizableReadLoop","detail":"{readStateIndex:2969; appliedIndex:2968; }","duration":"127.140571ms","start":"2024-04-15T18:43:33.882928Z","end":"2024-04-15T18:43:34.010069Z","steps":["trace[907999705] 'read index received'  (duration: 126.931871ms)","trace[907999705] 'applied index is now lower than readState.Index'  (duration: 208.1µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:34.010558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.698771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T18:43:34.01062Z","caller":"traceutil/trace.go:171","msg":"trace[1177634589] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2696; }","duration":"127.798872ms","start":"2024-04-15T18:43:33.882811Z","end":"2024-04-15T18:43:34.01061Z","steps":["trace[1177634589] 'agreement among raft nodes before linearized reading'  (duration: 127.591271ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010373Z","caller":"traceutil/trace.go:171","msg":"trace[320563100] transaction","detail":"{read_only:false; response_revision:2696; number_of_response:1; }","duration":"232.738612ms","start":"2024-04-15T18:43:33.777617Z","end":"2024-04-15T18:43:34.010356Z","steps":["trace[320563100] 'process raft request'  (duration: 232.232111ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.712206Z","caller":"traceutil/trace.go:171","msg":"trace[711121958] transaction","detail":"{read_only:false; response_revision:2697; number_of_response:1; }","duration":"181.877144ms","start":"2024-04-15T18:43:34.530256Z","end":"2024-04-15T18:43:34.712133Z","steps":["trace[711121958] 'process raft request'  (duration: 181.582843ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:46.38138Z","caller":"traceutil/trace.go:171","msg":"trace[31759011] transaction","detail":"{read_only:false; response_revision:2751; number_of_response:1; }","duration":"240.29982ms","start":"2024-04-15T18:43:46.141059Z","end":"2024-04-15T18:43:46.381359Z","steps":["trace[31759011] 'process raft request'  (duration: 230.957808ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283476Z","caller":"traceutil/trace.go:171","msg":"trace[1889997840] linearizableReadLoop","detail":"{readStateIndex:3044; appliedIndex:3043; }","duration":"110.447446ms","start":"2024-04-15T18:43:52.17301Z","end":"2024-04-15T18:43:52.283458Z","steps":["trace[1889997840] 'read index received'  (duration: 110.306146ms)","trace[1889997840] 'applied index is now lower than readState.Index'  (duration: 140.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.283605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.572946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.19.63.147\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-15T18:43:52.283637Z","caller":"traceutil/trace.go:171","msg":"trace[10951613] range","detail":"{range_begin:/registry/masterleases/172.19.63.147; range_end:; response_count:1; response_revision:2766; }","duration":"110.637847ms","start":"2024-04-15T18:43:52.17299Z","end":"2024-04-15T18:43:52.283628Z","steps":["trace[10951613] 'agreement among raft nodes before linearized reading'  (duration: 110.561546ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283903Z","caller":"traceutil/trace.go:171","msg":"trace[1508396561] transaction","detail":"{read_only:false; response_revision:2766; number_of_response:1; }","duration":"114.807253ms","start":"2024-04-15T18:43:52.169084Z","end":"2024-04-15T18:43:52.283892Z","steps":["trace[1508396561] 'process raft request'  (duration: 114.280152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:43:52.666427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.266757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14279945624074152814 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:462c8ee2fed56b6d>","response":"size:41"}
	{"level":"info","ts":"2024-04-15T18:43:52.667005Z","caller":"traceutil/trace.go:171","msg":"trace[1721457016] linearizableReadLoop","detail":"{readStateIndex:3045; appliedIndex:3044; }","duration":"237.167715ms","start":"2024-04-15T18:43:52.429394Z","end":"2024-04-15T18:43:52.666562Z","steps":["trace[1721457016] 'read index received'  (duration: 43.566658ms)","trace[1721457016] 'applied index is now lower than readState.Index'  (duration: 193.598957ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.667407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:43:52.285771Z","time spent":"381.633207ms","remote":"127.0.0.1:45166","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-04-15T18:43:52.66813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.156306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"warn","ts":"2024-04-15T18:43:52.66875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.352418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-653100-m03\" ","response":"range_response_count:1 size:3120"}
	{"level":"info","ts":"2024-04-15T18:43:52.668806Z","caller":"traceutil/trace.go:171","msg":"trace[2016319950] range","detail":"{range_begin:/registry/minions/ha-653100-m03; range_end:; response_count:1; response_revision:2766; }","duration":"239.433618ms","start":"2024-04-15T18:43:52.429363Z","end":"2024-04-15T18:43:52.668797Z","steps":["trace[2016319950] 'agreement among raft nodes before linearized reading'  (duration: 239.350018ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.668276Z","caller":"traceutil/trace.go:171","msg":"trace[416735202] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2766; }","duration":"230.317306ms","start":"2024-04-15T18:43:52.437947Z","end":"2024-04-15T18:43:52.668265Z","steps":["trace[416735202] 'agreement among raft nodes before linearized reading'  (duration: 230.123406ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.788795Z","caller":"traceutil/trace.go:171","msg":"trace[396711083] transaction","detail":"{read_only:false; response_revision:2768; number_of_response:1; }","duration":"109.505445ms","start":"2024-04-15T18:43:52.679272Z","end":"2024-04-15T18:43:52.788777Z","steps":["trace[396711083] 'process raft request'  (duration: 102.216136ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:57.928969Z","caller":"traceutil/trace.go:171","msg":"trace[2037072140] transaction","detail":"{read_only:false; response_revision:2782; number_of_response:1; }","duration":"141.624188ms","start":"2024-04-15T18:43:57.787327Z","end":"2024-04-15T18:43:57.928951Z","steps":["trace[2037072140] 'process raft request'  (duration: 141.090687ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:45:05 up 25 min,  0 users,  load average: 0.42, 0.35, 0.29
	Linux ha-653100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8533539a42fc] <==
	I0415 18:43:57.607897       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:44:07.615579       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:44:07.615664       1 main.go:227] handling current node
	I0415 18:44:07.615677       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:44:07.615684       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:44:17.625567       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:44:17.625690       1 main.go:227] handling current node
	I0415 18:44:17.625707       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:44:17.626063       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:44:27.638780       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:44:27.638908       1 main.go:227] handling current node
	I0415 18:44:27.638924       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:44:27.638933       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:44:37.646881       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:44:37.647017       1 main.go:227] handling current node
	I0415 18:44:37.647034       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:44:37.647043       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:44:47.653284       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:44:47.653386       1 main.go:227] handling current node
	I0415 18:44:47.653401       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:44:47.653410       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:44:57.667779       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:44:57.667892       1 main.go:227] handling current node
	I0415 18:44:57.667909       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:44:57.667918       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b7958fc0d30b] <==
	I0415 18:22:19.472339       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:22:19.472452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:22:19.472462       1 cache.go:39] Caches are synced for autoregister controller
	I0415 18:22:19.498049       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:22:19.510348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:22:20.354035       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 18:22:20.363724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 18:22:20.363838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:22:21.763949       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:22:21.866542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:22:22.100224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 18:22:22.118571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.63.147]
	I0415 18:22:22.120605       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 18:22:22.130952       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:22:22.385516       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:22:24.016138       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:22:24.048032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 18:22:24.081226       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:22:36.868875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 18:22:36.898745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 18:43:52.764570       1 trace.go:236] Trace[705971869]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.63.147,type:*v1.Endpoints,resource:apiServerIPInfo (15-Apr-2024 18:43:52.172) (total time: 592ms):
	Trace[705971869]: ---"initial value restored" 112ms (18:43:52.284)
	Trace[705971869]: ---"Transaction prepared" 384ms (18:43:52.669)
	Trace[705971869]: ---"Txn call completed" 95ms (18:43:52.764)
	Trace[705971869]: [592.295387ms] [592.295387ms] END
	
	
	==> kube-controller-manager [a0697c56404b] <==
	I0415 18:22:50.009242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="326.398µs"
	I0415 18:22:50.048064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="90.899µs"
	I0415 18:22:51.764868       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0415 18:22:52.188891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.302µs"
	I0415 18:22:52.287692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.246165ms"
	I0415 18:22:52.288055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="224.505µs"
	I0415 18:22:52.333123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.028652ms"
	I0415 18:22:52.333675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="426.909µs"
	I0415 18:27:05.738408       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0415 18:27:05.789870       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-5w5x4"
	I0415 18:27:05.841032       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-8pgjv"
	I0415 18:27:05.849328       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-tk6sh"
	I0415 18:27:05.899441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="160.526833ms"
	I0415 18:27:05.957239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.716604ms"
	I0415 18:27:05.998341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.497733ms"
	I0415 18:27:05.998579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.3µs"
	I0415 18:27:09.211983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.949061ms"
	I0415 18:27:09.212464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.3µs"
	I0415 18:43:41.144789       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-653100-m03\" does not exist"
	I0415 18:43:41.155868       1 range_allocator.go:380] "Set node PodCIDR" node="ha-653100-m03" podCIDRs=["10.244.1.0/24"]
	I0415 18:43:41.176203       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rtbf9"
	I0415 18:43:41.176231       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kvnct"
	I0415 18:43:42.027348       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-653100-m03"
	I0415 18:43:42.028227       1 event.go:376] "Event occurred" object="ha-653100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller"
	I0415 18:44:02.707914       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-653100-m03"
	
	
	==> kube-proxy [ece5eb28b20b] <==
	I0415 18:22:38.391716       1 server_others.go:72] "Using iptables proxy"
	I0415 18:22:38.407680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.63.147"]
	I0415 18:22:38.495319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:22:38.495346       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:22:38.495361       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:22:38.500785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:22:38.501443       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:22:38.501468       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:22:38.503945       1 config.go:188] "Starting service config controller"
	I0415 18:22:38.504041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:22:38.504268       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:22:38.504770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:22:38.505829       1 config.go:315] "Starting node config controller"
	I0415 18:22:38.507970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:22:38.605316       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:22:38.605583       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:22:38.608238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d68da55f0f38] <==
	W0415 18:22:20.533571       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:22:20.533671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:22:20.559089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.559148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.566941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.569271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.649432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:22:20.649545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:22:20.680518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:22:20.681133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:22:20.703015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:22:20.703474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:22:20.766338       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 18:22:20.766458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 18:22:20.789649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 18:22:20.790593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 18:22:20.803334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:22:20.804054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:22:20.808728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:22:20.809130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:22:20.838937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:22:20.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:22:20.865287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:22:20.865345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 18:22:22.187395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:40:24 ha-653100 kubelet[2226]: E0415 18:40:24.244272    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:40:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:40:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:40:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:40:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:41:24 ha-653100 kubelet[2226]: E0415 18:41:24.244403    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:41:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:41:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:41:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:41:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:42:24 ha-653100 kubelet[2226]: E0415 18:42:24.245239    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:42:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:43:24 ha-653100 kubelet[2226]: E0415 18:43:24.244761    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:43:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:44:24 ha-653100 kubelet[2226]: E0415 18:44:24.244027    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:44:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:44:56.621404    7900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100
E0415 18:45:10.508270   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100: (12.9816193s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-653100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:282: (dbg) kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-8pgjv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4hn5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-c4hn5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m56s (x4 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-tk6sh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjshx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjshx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m56s (x4 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (285.57s)
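The FailedScheduling events in the post-mortem above follow a fixed message format from the default scheduler ("X/Y nodes are available: ..."). A minimal sketch for pulling the schedulable/total node counts out of such a message — not part of the minikube test suite; the function name and regex are assumptions:

```python
import re

# Matches the leading "X/Y nodes are available" clause that kube-scheduler
# emits in FailedScheduling event messages.
_NODES_RE = re.compile(r"^(\d+)/(\d+) nodes are available")

def parse_failed_scheduling(message: str):
    """Return (schedulable, total) node counts from a FailedScheduling message."""
    m = _NODES_RE.match(message)
    if m is None:
        raise ValueError(f"unrecognized scheduler message: {message!r}")
    return int(m.group(1)), int(m.group(2))

msg = ("0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. "
       "preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.")
print(parse_failed_scheduling(msg))  # (0, 1)
```

With a single schedulable node and a busybox deployment carrying pod anti-affinity, the second and third replicas can never be placed, which is why both pods above stay Pending.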

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (56.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (20.3467173s)
ha_test.go:304: expected profile "ha-653100" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-653100\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-653100\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-653100\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.19.63.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.19.63.147\",\"Port\":8443,\"KubernetesVersion\
":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.19.63.104\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.19.51.108\",\"Port\":0,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":
false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube6:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"Disa
bleMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
ha_test.go:307: expected profile "ha-653100" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-653100\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-653100\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1
,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-653100\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.19.63.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.19.63.147\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.19.63.104\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.19.51.108\",\"Port\":0,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":f
alse,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube6:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\"
:false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100: (13.3235414s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 logs -n 25: (8.9636532s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-5w5x4 -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.48.1             |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| node    | add -p ha-653100 -v=7                | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:40 UTC | 15 Apr 24 18:44 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:19:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
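	The docker.service swap a few lines above follows a write-candidate-then-diff pattern: the new unit is written to `docker.service.new`, and it replaces the live file only when `diff` reports a difference (or, as here on first provisioning, fails because the live file does not exist yet). A minimal sketch of that idempotent update, using illustrative /tmp paths instead of /lib/systemd/system and omitting the daemon-reload/restart:

```shell
# Write a candidate unit file, then swap it in only when it differs
# from the live copy (paths here are illustrative, not the real ones).
cur=/tmp/demo-docker.service
new=/tmp/demo-docker.service.new
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$new"
if ! diff -u "$cur" "$new" >/dev/null 2>&1; then
    mv "$new" "$cur"   # first run: diff fails with "can't stat", as in the log
fi
grep '^Description=' "$cur"
```

On a second run with an unchanged candidate, `diff` exits 0 and the `mv` is skipped, which is what makes repeated provisioning safe.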
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
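	The clock fix above passes host epoch seconds to the guest via `date -s @<epoch>`. Converting the same value back with GNU date's `-d` flag (an assumption: GNU coreutils, as on the Buildroot guest) reproduces the timestamp the log reports:

```shell
# Epoch seconds taken from the log line `sudo date -s @1713205288`;
# -u forces UTC, -d "@..." parses seconds since the epoch (GNU date).
epoch=1713205288
date -u -d "@$epoch"   # → Mon Apr 15 18:21:28 UTC 2024
```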
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
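	The sed passes above rewrite /etc/containerd/config.toml in place to select the cgroupfs driver. A toy version of the `SystemdCgroup` edit, run against a sample file rather than the real config, shows how the capture group preserves the original indentation:

```shell
# Flip SystemdCgroup to false while keeping leading whitespace intact
# (demo file stands in for /etc/containerd/config.toml).
cfg=/tmp/demo-config.toml
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"   # →     SystemdCgroup = false
```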
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
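	The tee step above writes a one-line crictl config pointing the CRI tools at the cri-dockerd socket. The same write, against a demo path instead of /etc/crictl.yaml:

```shell
# crictl reads its runtime endpoint from this YAML file; the log's run
# targets /etc/crictl.yaml via sudo tee.
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' > /tmp/demo-crictl.yaml
cat /tmp/demo-crictl.yaml   # → runtime-endpoint: unix:///var/run/cri-dockerd.sock
```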
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
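	The /etc/hosts update above is a replace-or-append: strip any existing `host.minikube.internal` line, re-append the current one, and copy the temp file back over. A sketch of that pattern against a demo hosts file (the real run uses a tab-anchored grep and `sudo cp`):

```shell
# Remove any stale host.minikube.internal entry, append the fresh one,
# then atomically replace the file (demo path instead of /etc/hosts).
hosts=/tmp/demo-hosts
printf '127.0.0.1 localhost\n1.2.3.4 host.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal' "$hosts"; \
  printf '172.19.48.1 host.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"   # → 172.19.48.1 host.minikube.internal
```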
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
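	The stat call above is a cheap existence probe: a non-zero exit (as in the stderr just shown) means the preload tarball is absent and must be copied over. The same check, with an illustrative file name:

```shell
# stat exits non-zero when the file is missing, so its exit status
# decides between using the cached tarball and scp'ing a fresh one.
if stat -c '%s %y' /tmp/no-such-preload.tar.lz4 >/dev/null 2>&1; then
    echo "cached"
else
    echo "needs copy"   # this branch runs: the demo file does not exist
fi
```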
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
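The lines above show the driver polling `( Hyper-V\Get-VM … ).state` and `…networkadapters[0]).ipaddresses[0]` until the guest publishes an address (empty stdout means "not yet"). The retry pattern can be sketched as illustrative Python; `wait_for_ip` and the probe are hypothetical stand-ins, not minikube APIs:

```python
def wait_for_ip(probe, attempts=10):
    """Poll until the guest's first NIC reports an address.

    `probe` stands in for running the Hyper-V cmdlet
    (( Get-VM <vm> ).networkadapters[0]).ipaddresses[0],
    which prints nothing until integration services publish an IP.
    """
    for _ in range(attempts):
        ip = probe().strip()
        if ip:
            return ip
    return None  # caller would keep waiting or time out

# Simulated guest: empty stdout for four polls, then the address appears.
responses = iter(["", "", "", "", "172.19.63.104"])
print(wait_for_ip(lambda: next(responses)))
```

In the log this takes five probe rounds (18:23:33 through 18:24:00), with roughly a one-second sleep between empty results.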
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
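The SSH command just executed is the guarded /etc/hosts edit: add `127.0.1.1 ha-653100-m02` only if no line already maps the hostname, reusing an existing `127.0.1.1` line when one is present. The same idempotent logic, rendered as a hypothetical Python helper over the file's text:

```python
import re

def ensure_host_entry(hosts_text, hostname):
    """Mirror the grep/sed/tee sequence from the log (illustrative only)."""
    # grep -xq '.*\s<hostname>': already mapped, nothing to do.
    if re.search(r"^.*\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text
    # sed branch: rewrite an existing 127.0.1.1 line in place ...
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + hostname,
                      hosts_text, flags=re.M)
    # ... tee -a branch: otherwise append a fresh entry.
    return hosts_text + "127.0.1.1 " + hostname + "\n"
```

Running it twice is a no-op, which is why the provisioner can safely re-run the command on every boot.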
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
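The `generating server cert` line above lists the SANs baked into the node's server.pem: `[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]`. A minimal sketch of assembling that set (hypothetical helper; the real provisioner also honors user-supplied SANs from the machine config):

```python
def server_cert_sans(node_ip, node_name, extra=()):
    """Assemble the SAN set for the node's TLS server certificate:
    loopback, the VM's address, its hostname, plus the fixed names."""
    sans = {"127.0.0.1", "localhost", "minikube", node_ip, node_name}
    sans.update(extra)       # any additional SANs requested by the user
    return sorted(sans)      # deterministic order for the CSR

print(server_cert_sans("172.19.63.104", "ha-653100-m02"))
```

Because the VM's IP is in the SAN list, the cert must be regenerated whenever the Hyper-V DHCP lease hands the node a new address.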
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
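The command above is the install-if-changed idiom: `diff -u old new || { mv new old; systemctl daemon-reload && enable && restart; }`. Here `diff` fails because the unit did not exist yet (`can't stat`), so the new file is moved into place and docker is enabled and restarted. The decision logic, as an illustrative Python sketch (names hypothetical):

```python
def install_if_changed(current_text, new_text):
    """Return (installed_text, restart_needed), mirroring the shell idiom.

    `current_text` is None when the unit file does not exist yet,
    which is the "diff: can't stat" case seen in the log.
    """
    if current_text is not None and current_text == new_text:
        return current_text, False   # diff exits 0 -> skip mv and restart
    return new_text, True            # diff fails -> mv + daemon-reload + restart

unit = "[Unit]\nDescription=Docker Application Container Engine\n"
print(install_if_changed(None, unit)[1])   # first install: restart needed
print(install_if_changed(unit, unit)[1])   # unchanged: no restart
```

The comparison is what keeps repeated provisioning runs from needlessly restarting docker (and every container on the node) when the rendered unit is byte-identical.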
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 
	
	
	==> Docker <==
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:29 ha-653100 dockerd[1321]: 2024/04/15 18:39:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:39:30 ha-653100 dockerd[1321]: 2024/04/15 18:39:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:04 ha-653100 dockerd[1321]: 2024/04/15 18:45:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:04 ha-653100 dockerd[1321]: 2024/04/15 18:45:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3810def19c30b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   4ba88ccaba1a5       busybox-7fdf7869d9-5w5x4
	58d38dcc399d7       cbb01a7bd410d                                                                                         23 minutes ago      Running             coredns                   0                   66b040582e9fe       coredns-76f75df574-hz5n2
	7f2e95849717e       6e38f40d628db                                                                                         23 minutes ago      Running             storage-provisioner       0                   41946a72e3913       storage-provisioner
	79df4cc493ccd       cbb01a7bd410d                                                                                         23 minutes ago      Running             coredns                   0                   c2bc3be2dada4       coredns-76f75df574-sw766
	8533539a42fc8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago      Running             kindnet-cni               0                   840d4c720c681       kindnet-k8jt8
	ece5eb28b20be       a1d263b5dc5b0                                                                                         23 minutes ago      Running             kube-proxy                0                   590527a253a30       kube-proxy-dgh6m
	0cf5b602fc0c4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     23 minutes ago      Running             kube-vip                  0                   71c70584ee9c6       kube-vip-ha-653100
	a0697c56404b8       6052a25da3f97                                                                                         23 minutes ago      Running             kube-controller-manager   0                   5c4190df9fb18       kube-controller-manager-ha-653100
	d68da55f0f382       8c390d98f50c0                                                                                         23 minutes ago      Running             kube-scheduler            0                   92e96b6d41bb2       kube-scheduler-ha-653100
	b7958fc0d30b8       39f995c9f1996                                                                                         23 minutes ago      Running             kube-apiserver            0                   a7b3e44514ced       kube-apiserver-ha-653100
	a0fa6c17de399       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   65fe5df3a93dd       etcd-ha-653100
	
	
	==> coredns [58d38dcc399d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45845 - 8967 "HINFO IN 8354542665525626293.2689365418710486320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045113649s
	[INFO] 10.244.0.4:51221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.078581836s
	[INFO] 10.244.0.4:47875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.52769764s
	[INFO] 10.244.0.4:52717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306601s
	[INFO] 10.244.0.4:39163 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050987688s
	[INFO] 10.244.0.4:37816 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001543s
	[INFO] 10.244.0.4:60144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014447825s
	[INFO] 10.244.0.4:55552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204001s
	[INFO] 10.244.0.4:36177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153901s
	[INFO] 10.244.0.4:46410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000283001s
	[INFO] 10.244.0.4:57190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168701s
	[INFO] 10.244.0.4:47185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002385s
	[INFO] 10.244.0.4:34139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001337s
	[INFO] 10.244.0.4:51029 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000098701s
	
	
	==> coredns [79df4cc493cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57426 - 12156 "HINFO IN 2507889984284766848.6813386495577107890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.33687907s
	[INFO] 10.244.0.4:40226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000337201s
	[INFO] 10.244.0.4:56672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.049146285s
	[INFO] 10.244.0.4:54337 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001723s
	[INFO] 10.244.0.4:58976 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002015s
	[INFO] 10.244.0.4:41149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024s
	[INFO] 10.244.0.4:37438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000310601s
	[INFO] 10.244.0.4:54099 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002466s
	
	
	==> describe nodes <==
	Name:               ha-653100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:22:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:45:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.63.147
	  Hostname:    ha-653100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7ba8367096d4bf9b0e4541361a84287
	  System UUID:                64d5f641-1f2f-ce46-8918-a08d661c1258
	  Boot ID:                    994d41df-0ae9-4f39-ad28-f5e794182c63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-5w5x4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-76f75df574-hz5n2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 coredns-76f75df574-sw766             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-ha-653100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-k8jt8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-653100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-653100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-dgh6m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-653100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-653100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23m   kube-proxy       
	  Normal  Starting                 23m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m   kubelet          Node ha-653100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m   kubelet          Node ha-653100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m   kubelet          Node ha-653100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m   node-controller  Node ha-653100 event: Registered Node ha-653100 in Controller
	  Normal  NodeReady                23m   kubelet          Node ha-653100 status is now: NodeReady
	
	
	Name:               ha-653100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T18_43_42_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:43:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:45:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.51.108
	  Hostname:    ha-653100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ceb376f540fe4419a1393b81dd4c70ec
	  System UUID:                316f69f2-57b1-1a4d-9808-3339f6c9e586
	  Boot ID:                    231d6308-8f63-4640-95e7-8ba95af6dfa1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rtbf9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m21s
	  kube-system                 kube-proxy-kvnct    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m21s (x2 over 2m21s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x2 over 2m21s)  kubelet          Node ha-653100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x2 over 2m21s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller
	  Normal  NodeReady                2m                     kubelet          Node ha-653100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.084698] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.376265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 18:21] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.209937] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +33.615481] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.104388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615924] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.216331] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.260985] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.876807] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.212935] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.227831] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.311128] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[Apr15 18:22] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.114802] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.164512] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +7.677617] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.774902] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.244487] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[ +14.155639] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.279744] kauditd_printk_skb: 29 callbacks suppressed
	[Apr15 18:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0fa6c17de39] <==
	{"level":"info","ts":"2024-04-15T18:41:00.22731Z","caller":"traceutil/trace.go:171","msg":"trace[76664393] transaction","detail":"{read_only:false; response_revision:2419; number_of_response:1; }","duration":"105.483959ms","start":"2024-04-15T18:41:00.121805Z","end":"2024-04-15T18:41:00.227289Z","steps":["trace[76664393] 'process raft request'  (duration: 105.285059ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:41:01.418769Z","caller":"traceutil/trace.go:171","msg":"trace[8276609] transaction","detail":"{read_only:false; response_revision:2421; number_of_response:1; }","duration":"106.31566ms","start":"2024-04-15T18:41:01.312434Z","end":"2024-04-15T18:41:01.41875Z","steps":["trace[8276609] 'process raft request'  (duration: 106.038259ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:42:17.431757Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2021}
	{"level":"info","ts":"2024-04-15T18:42:17.443797Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2021,"took":"11.390416ms","hash":1421491769,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-15T18:42:17.443837Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1421491769,"revision":2021,"compact-revision":1485}
	{"level":"info","ts":"2024-04-15T18:43:33.518295Z","caller":"traceutil/trace.go:171","msg":"trace[443010453] transaction","detail":"{read_only:false; response_revision:2695; number_of_response:1; }","duration":"266.440557ms","start":"2024-04-15T18:43:33.251827Z","end":"2024-04-15T18:43:33.518267Z","steps":["trace[443010453] 'process raft request'  (duration: 266.058657ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010261Z","caller":"traceutil/trace.go:171","msg":"trace[907999705] linearizableReadLoop","detail":"{readStateIndex:2969; appliedIndex:2968; }","duration":"127.140571ms","start":"2024-04-15T18:43:33.882928Z","end":"2024-04-15T18:43:34.010069Z","steps":["trace[907999705] 'read index received'  (duration: 126.931871ms)","trace[907999705] 'applied index is now lower than readState.Index'  (duration: 208.1µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:34.010558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.698771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T18:43:34.01062Z","caller":"traceutil/trace.go:171","msg":"trace[1177634589] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2696; }","duration":"127.798872ms","start":"2024-04-15T18:43:33.882811Z","end":"2024-04-15T18:43:34.01061Z","steps":["trace[1177634589] 'agreement among raft nodes before linearized reading'  (duration: 127.591271ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010373Z","caller":"traceutil/trace.go:171","msg":"trace[320563100] transaction","detail":"{read_only:false; response_revision:2696; number_of_response:1; }","duration":"232.738612ms","start":"2024-04-15T18:43:33.777617Z","end":"2024-04-15T18:43:34.010356Z","steps":["trace[320563100] 'process raft request'  (duration: 232.232111ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.712206Z","caller":"traceutil/trace.go:171","msg":"trace[711121958] transaction","detail":"{read_only:false; response_revision:2697; number_of_response:1; }","duration":"181.877144ms","start":"2024-04-15T18:43:34.530256Z","end":"2024-04-15T18:43:34.712133Z","steps":["trace[711121958] 'process raft request'  (duration: 181.582843ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:46.38138Z","caller":"traceutil/trace.go:171","msg":"trace[31759011] transaction","detail":"{read_only:false; response_revision:2751; number_of_response:1; }","duration":"240.29982ms","start":"2024-04-15T18:43:46.141059Z","end":"2024-04-15T18:43:46.381359Z","steps":["trace[31759011] 'process raft request'  (duration: 230.957808ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283476Z","caller":"traceutil/trace.go:171","msg":"trace[1889997840] linearizableReadLoop","detail":"{readStateIndex:3044; appliedIndex:3043; }","duration":"110.447446ms","start":"2024-04-15T18:43:52.17301Z","end":"2024-04-15T18:43:52.283458Z","steps":["trace[1889997840] 'read index received'  (duration: 110.306146ms)","trace[1889997840] 'applied index is now lower than readState.Index'  (duration: 140.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.283605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.572946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.19.63.147\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-15T18:43:52.283637Z","caller":"traceutil/trace.go:171","msg":"trace[10951613] range","detail":"{range_begin:/registry/masterleases/172.19.63.147; range_end:; response_count:1; response_revision:2766; }","duration":"110.637847ms","start":"2024-04-15T18:43:52.17299Z","end":"2024-04-15T18:43:52.283628Z","steps":["trace[10951613] 'agreement among raft nodes before linearized reading'  (duration: 110.561546ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283903Z","caller":"traceutil/trace.go:171","msg":"trace[1508396561] transaction","detail":"{read_only:false; response_revision:2766; number_of_response:1; }","duration":"114.807253ms","start":"2024-04-15T18:43:52.169084Z","end":"2024-04-15T18:43:52.283892Z","steps":["trace[1508396561] 'process raft request'  (duration: 114.280152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:43:52.666427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.266757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14279945624074152814 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:462c8ee2fed56b6d>","response":"size:41"}
	{"level":"info","ts":"2024-04-15T18:43:52.667005Z","caller":"traceutil/trace.go:171","msg":"trace[1721457016] linearizableReadLoop","detail":"{readStateIndex:3045; appliedIndex:3044; }","duration":"237.167715ms","start":"2024-04-15T18:43:52.429394Z","end":"2024-04-15T18:43:52.666562Z","steps":["trace[1721457016] 'read index received'  (duration: 43.566658ms)","trace[1721457016] 'applied index is now lower than readState.Index'  (duration: 193.598957ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.667407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:43:52.285771Z","time spent":"381.633207ms","remote":"127.0.0.1:45166","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-04-15T18:43:52.66813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.156306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"warn","ts":"2024-04-15T18:43:52.66875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.352418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-653100-m03\" ","response":"range_response_count:1 size:3120"}
	{"level":"info","ts":"2024-04-15T18:43:52.668806Z","caller":"traceutil/trace.go:171","msg":"trace[2016319950] range","detail":"{range_begin:/registry/minions/ha-653100-m03; range_end:; response_count:1; response_revision:2766; }","duration":"239.433618ms","start":"2024-04-15T18:43:52.429363Z","end":"2024-04-15T18:43:52.668797Z","steps":["trace[2016319950] 'agreement among raft nodes before linearized reading'  (duration: 239.350018ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.668276Z","caller":"traceutil/trace.go:171","msg":"trace[416735202] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2766; }","duration":"230.317306ms","start":"2024-04-15T18:43:52.437947Z","end":"2024-04-15T18:43:52.668265Z","steps":["trace[416735202] 'agreement among raft nodes before linearized reading'  (duration: 230.123406ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.788795Z","caller":"traceutil/trace.go:171","msg":"trace[396711083] transaction","detail":"{read_only:false; response_revision:2768; number_of_response:1; }","duration":"109.505445ms","start":"2024-04-15T18:43:52.679272Z","end":"2024-04-15T18:43:52.788777Z","steps":["trace[396711083] 'process raft request'  (duration: 102.216136ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:57.928969Z","caller":"traceutil/trace.go:171","msg":"trace[2037072140] transaction","detail":"{read_only:false; response_revision:2782; number_of_response:1; }","duration":"141.624188ms","start":"2024-04-15T18:43:57.787327Z","end":"2024-04-15T18:43:57.928951Z","steps":["trace[2037072140] 'process raft request'  (duration: 141.090687ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:46:02 up 26 min,  0 users,  load average: 0.16, 0.29, 0.27
	Linux ha-653100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8533539a42fc] <==
	I0415 18:44:57.667918       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:45:07.681239       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:45:07.681467       1 main.go:227] handling current node
	I0415 18:45:07.681612       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:45:07.681785       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:45:17.688870       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:45:17.688970       1 main.go:227] handling current node
	I0415 18:45:17.688984       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:45:17.688993       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:45:27.696831       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:45:27.696945       1 main.go:227] handling current node
	I0415 18:45:27.696962       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:45:27.696971       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:45:37.704150       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:45:37.704684       1 main.go:227] handling current node
	I0415 18:45:37.704757       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:45:37.704839       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:45:47.717804       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:45:47.717893       1 main.go:227] handling current node
	I0415 18:45:47.717909       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:45:47.717918       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:45:57.733848       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:45:57.733970       1 main.go:227] handling current node
	I0415 18:45:57.733987       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:45:57.733997       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b7958fc0d30b] <==
	I0415 18:22:19.472339       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:22:19.472452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:22:19.472462       1 cache.go:39] Caches are synced for autoregister controller
	I0415 18:22:19.498049       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:22:19.510348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:22:20.354035       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 18:22:20.363724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 18:22:20.363838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:22:21.763949       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:22:21.866542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:22:22.100224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 18:22:22.118571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.63.147]
	I0415 18:22:22.120605       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 18:22:22.130952       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:22:22.385516       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:22:24.016138       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:22:24.048032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 18:22:24.081226       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:22:36.868875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 18:22:36.898745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 18:43:52.764570       1 trace.go:236] Trace[705971869]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.63.147,type:*v1.Endpoints,resource:apiServerIPInfo (15-Apr-2024 18:43:52.172) (total time: 592ms):
	Trace[705971869]: ---"initial value restored" 112ms (18:43:52.284)
	Trace[705971869]: ---"Transaction prepared" 384ms (18:43:52.669)
	Trace[705971869]: ---"Txn call completed" 95ms (18:43:52.764)
	Trace[705971869]: [592.295387ms] [592.295387ms] END
	
	
	==> kube-controller-manager [a0697c56404b] <==
	I0415 18:22:50.009242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="326.398µs"
	I0415 18:22:50.048064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="90.899µs"
	I0415 18:22:51.764868       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0415 18:22:52.188891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.302µs"
	I0415 18:22:52.287692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.246165ms"
	I0415 18:22:52.288055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="224.505µs"
	I0415 18:22:52.333123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.028652ms"
	I0415 18:22:52.333675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="426.909µs"
	I0415 18:27:05.738408       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0415 18:27:05.789870       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-5w5x4"
	I0415 18:27:05.841032       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-8pgjv"
	I0415 18:27:05.849328       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-tk6sh"
	I0415 18:27:05.899441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="160.526833ms"
	I0415 18:27:05.957239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.716604ms"
	I0415 18:27:05.998341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.497733ms"
	I0415 18:27:05.998579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.3µs"
	I0415 18:27:09.211983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.949061ms"
	I0415 18:27:09.212464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.3µs"
	I0415 18:43:41.144789       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-653100-m03\" does not exist"
	I0415 18:43:41.155868       1 range_allocator.go:380] "Set node PodCIDR" node="ha-653100-m03" podCIDRs=["10.244.1.0/24"]
	I0415 18:43:41.176203       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rtbf9"
	I0415 18:43:41.176231       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kvnct"
	I0415 18:43:42.027348       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-653100-m03"
	I0415 18:43:42.028227       1 event.go:376] "Event occurred" object="ha-653100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller"
	I0415 18:44:02.707914       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-653100-m03"
	
	
	==> kube-proxy [ece5eb28b20b] <==
	I0415 18:22:38.391716       1 server_others.go:72] "Using iptables proxy"
	I0415 18:22:38.407680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.63.147"]
	I0415 18:22:38.495319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:22:38.495346       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:22:38.495361       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:22:38.500785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:22:38.501443       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:22:38.501468       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:22:38.503945       1 config.go:188] "Starting service config controller"
	I0415 18:22:38.504041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:22:38.504268       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:22:38.504770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:22:38.505829       1 config.go:315] "Starting node config controller"
	I0415 18:22:38.507970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:22:38.605316       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:22:38.605583       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:22:38.608238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d68da55f0f38] <==
	W0415 18:22:20.533571       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:22:20.533671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:22:20.559089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.559148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.566941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.569271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.649432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:22:20.649545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:22:20.680518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:22:20.681133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:22:20.703015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:22:20.703474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:22:20.766338       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 18:22:20.766458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 18:22:20.789649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 18:22:20.790593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 18:22:20.803334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:22:20.804054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:22:20.808728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:22:20.809130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:22:20.838937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:22:20.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:22:20.865287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:22:20.865345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 18:22:22.187395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:41:24 ha-653100 kubelet[2226]: E0415 18:41:24.244403    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:41:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:41:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:41:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:41:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:42:24 ha-653100 kubelet[2226]: E0415 18:42:24.245239    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:42:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:43:24 ha-653100 kubelet[2226]: E0415 18:43:24.244761    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:43:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:44:24 ha-653100 kubelet[2226]: E0415 18:44:24.244027    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:44:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:45:24 ha-653100 kubelet[2226]: E0415 18:45:24.245433    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:45:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:45:53.770966   11364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100: (13.1008777s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-653100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh
helpers_test.go:282: (dbg) kubectl --context ha-653100 describe pod busybox-7fdf7869d9-8pgjv busybox-7fdf7869d9-tk6sh:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-8pgjv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4hn5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-c4hn5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m53s (x4 over 19m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-tk6sh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjshx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjshx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m53s (x4 over 19m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (56.95s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (75.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 status --output json -v=7 --alsologtostderr
E0415 18:46:53.564060   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-653100 status --output json -v=7 --alsologtostderr: exit status 2 (38.5628013s)

                                                
                                                
-- stdout --
	[{"Name":"ha-653100","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-653100-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-653100-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:46:17.057457    2732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:46:17.146883    2732 out.go:291] Setting OutFile to fd 924 ...
	I0415 18:46:17.147363    2732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:46:17.147363    2732 out.go:304] Setting ErrFile to fd 864...
	I0415 18:46:17.147363    2732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:46:17.164555    2732 out.go:298] Setting JSON to true
	I0415 18:46:17.164584    2732 mustload.go:65] Loading cluster: ha-653100
	I0415 18:46:17.164584    2732 notify.go:220] Checking for updates...
	I0415 18:46:17.165548    2732 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:46:17.165650    2732 status.go:255] checking status of ha-653100 ...
	I0415 18:46:17.166578    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:46:19.505903    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:19.505903    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:19.505903    2732 status.go:330] ha-653100 host status = "Running" (err=<nil>)
	I0415 18:46:19.505903    2732 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:46:19.506772    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:46:21.842500    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:21.842500    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:21.842782    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:46:24.658258    2732 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:46:24.658258    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:24.658258    2732 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:46:24.674365    2732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:46:24.674365    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:46:27.028009    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:27.028009    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:27.028094    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:46:29.824117    2732 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:46:29.824312    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:29.824312    2732 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:46:29.926577    2732 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2521683s)
	I0415 18:46:29.942249    2732 ssh_runner.go:195] Run: systemctl --version
	I0415 18:46:29.968051    2732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:46:29.999291    2732 kubeconfig.go:125] found "ha-653100" server: "https://172.19.63.254:8443"
	I0415 18:46:29.999421    2732 api_server.go:166] Checking apiserver status ...
	I0415 18:46:30.013058    2732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:46:30.057050    2732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup
	W0415 18:46:30.077428    2732 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:46:30.092172    2732 ssh_runner.go:195] Run: ls
	I0415 18:46:30.100741    2732 api_server.go:253] Checking apiserver healthz at https://172.19.63.254:8443/healthz ...
	I0415 18:46:30.113290    2732 api_server.go:279] https://172.19.63.254:8443/healthz returned 200:
	ok
	I0415 18:46:30.113290    2732 status.go:422] ha-653100 apiserver status = Running (err=<nil>)
	I0415 18:46:30.113290    2732 status.go:257] ha-653100 status: &{Name:ha-653100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:46:30.113290    2732 status.go:255] checking status of ha-653100-m02 ...
	I0415 18:46:30.114325    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:46:32.419732    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:32.420313    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:32.420313    2732 status.go:330] ha-653100-m02 host status = "Running" (err=<nil>)
	I0415 18:46:32.420313    2732 host.go:66] Checking if "ha-653100-m02" exists ...
	I0415 18:46:32.421084    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:46:34.756277    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:34.756277    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:34.756830    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:46:37.553741    2732 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:46:37.553741    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:37.554471    2732 host.go:66] Checking if "ha-653100-m02" exists ...
	I0415 18:46:37.569436    2732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:46:37.569436    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:46:39.834824    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:39.834824    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:39.835076    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:46:42.598605    2732 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:46:42.598605    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:42.598605    2732 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:46:42.705893    2732 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1363253s)
	I0415 18:46:42.720730    2732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:46:42.750459    2732 kubeconfig.go:125] found "ha-653100" server: "https://172.19.63.254:8443"
	I0415 18:46:42.750459    2732 api_server.go:166] Checking apiserver status ...
	I0415 18:46:42.765467    2732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0415 18:46:42.793308    2732 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:46:42.793462    2732 status.go:422] ha-653100-m02 apiserver status = Stopped (err=<nil>)
	I0415 18:46:42.793462    2732 status.go:257] ha-653100-m02 status: &{Name:ha-653100-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:46:42.793462    2732 status.go:255] checking status of ha-653100-m03 ...
	I0415 18:46:42.794108    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:46:45.106529    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:45.106958    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:45.106958    2732 status.go:330] ha-653100-m03 host status = "Running" (err=<nil>)
	I0415 18:46:45.106958    2732 host.go:66] Checking if "ha-653100-m03" exists ...
	I0415 18:46:45.107789    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:46:47.447476    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:47.447476    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:47.447476    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 18:46:50.208628    2732 main.go:141] libmachine: [stdout =====>] : 172.19.51.108
	
	I0415 18:46:50.208628    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:50.209286    2732 host.go:66] Checking if "ha-653100-m03" exists ...
	I0415 18:46:50.224560    2732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:46:50.224560    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:46:52.517624    2732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:46:52.517624    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:52.517751    2732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 18:46:55.318228    2732 main.go:141] libmachine: [stdout =====>] : 172.19.51.108
	
	I0415 18:46:55.318228    2732 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:46:55.320042    2732 sshutil.go:53] new ssh client: &{IP:172.19.51.108 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m03\id_rsa Username:docker}
	I0415 18:46:55.415257    2732 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1906551s)
	I0415 18:46:55.432598    2732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:46:55.459742    2732 status.go:257] ha-653100-m03 status: &{Name:ha-653100-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-653100 status --output json -v=7 --alsologtostderr" : exit status 2
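The exit status 2 above lines up with the status structs logged for ha-653100-m02, whose Kubelet and APIServer are reported as `Stopped`. A minimal sketch of the check the test performs, filtering the JSON shape that `minikube status --output json` emits for a multi-node profile (field names taken from the status structs in the log; the sample values below are constructed from this run, not captured output):

```python
import json

# Sample multi-node status JSON; field names mirror the logged status structs
# (Name, Host, Kubelet, APIServer, Kubeconfig). Values reflect this run:
# the primary node is healthy, ha-653100-m02 has stopped components.
status_json = """[
  {"Name": "ha-653100", "Host": "Running", "Kubelet": "Running",
   "APIServer": "Running", "Kubeconfig": "Configured"},
  {"Name": "ha-653100-m02", "Host": "Running", "Kubelet": "Stopped",
   "APIServer": "Stopped", "Kubeconfig": "Configured"}
]"""

# Collect nodes with any stopped control-plane component -- the condition
# that makes the status command (and hence the test) report failure.
stopped = [node["Name"] for node in json.loads(status_json)
           if node["Kubelet"] == "Stopped" or node["APIServer"] == "Stopped"]
print(stopped)  # ['ha-653100-m02']
```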
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100: (13.21836s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 logs -n 25: (9.3681394s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-5w5x4 -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.48.1             |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| node    | add -p ha-653100 -v=7                | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:40 UTC | 15 Apr 24 18:44 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:19:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
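The disk steps above (New-VHD -Fixed 10MB, "Writing magic tar header", Convert-VHD to dynamic, Resize-VHD to 20000MB) follow the classic docker-machine boot disk trick: a tar archive holding the SSH key is written at offset 0 of the raw disk, and the guest extracts it on first boot before formatting. A minimal sketch of the raw-file part, assumed from the log rather than taken from minikube's code:

```shell
# Sketch of the "magic tar header" idea (assumption based on the log lines
# above, not minikube's exact implementation).
cd "$(mktemp -d)"
echo 'fake ssh key material' > id_rsa                    # stand-in for the real key
tar cf keys.tar id_rsa                                   # the "magic tar header"
dd if=/dev/zero of=disk.img bs=1M count=10 2>/dev/null   # 10MB fixed disk, as in the log
dd if=keys.tar of=disk.img conv=notrunc 2>/dev/null      # tar data written at offset 0
tar tf disk.img                                          # the padded disk still reads as a valid tar
```

Because tar treats trailing zero blocks as end-of-archive, the zero-filled remainder of the disk is ignored when the guest lists or extracts the archive.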
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
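The repeated `( Hyper-V\Get-VM … ).state` / `ipaddresses[0]` calls above are a plain poll loop: query the first adapter's IP and retry until DHCP has assigned one (empty stdout means "not yet"). A generic sketch, where `get_vm_ip` is a hypothetical stand-in for the PowerShell query:

```shell
# Poll-until-nonempty loop, as seen in the log. get_vm_ip simulates a query
# that returns nothing for the first two attempts, then an address.
count_file=$(mktemp)
echo 0 > "$count_file"
get_vm_ip() {
  n=$(cat "$count_file")
  echo $((n + 1)) > "$count_file"
  if [ "$n" -ge 2 ]; then echo "172.19.63.147"; fi   # empty until the 3rd call
}
ip=""
while [ -z "$ip" ]; do
  ip=$(get_vm_ip)
  if [ -z "$ip" ]; then sleep 0.1; fi   # the driver waits about a second per attempt
done
echo "VM IP: $ip"
```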
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
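The /etc/hosts command above either rewrites an existing `127.0.1.1` line to the new hostname or appends one. The same logic can be exercised against a scratch copy, without sudo:

```shell
# Same replace-or-append logic as the SSH command in the log, run against a
# temporary hosts file instead of /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"
NAME=ha-653100
if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # an old 127.0.1.1 entry exists: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no 127.0.1.1 entry yet: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```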
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
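The `diff … || { mv …; systemctl … }` command above is a compare-then-install idiom: the candidate unit is written to `docker.service.new`, and only when it differs from (or, as in the log's "can't stat" case, when there is no) installed file does the driver swap it in and restart docker. A file-only sketch, with the systemctl calls replaced by an echo so it runs anywhere:

```shell
# Compare-then-install, as in the log. On first provision the target file is
# missing, so diff exits non-zero and the install branch runs.
dir=$(mktemp -d)
printf '[Service]\nExecStart=/usr/bin/dockerd --new-flags\n' > "$dir/docker.service.new"
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "would run: systemctl daemon-reload && systemctl restart docker"
}
grep -c ExecStart "$dir/docker.service"
```

Running it a second time against an unchanged candidate would make `diff` succeed, so the restart branch is skipped: the unit update is idempotent.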
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
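The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup scheme: the CA is linked into `/etc/ssl/certs` under the name `<subject-hash>.0` so that OpenSSL can find it by hash. A minimal sketch of that step, using a throwaway self-signed CA in a temp directory (the file names here are stand-ins, not the real minikube certs):

```shell
# Generate a throwaway CA, compute its subject hash, and create the <hash>.0 link
# OpenSSL uses for CA lookup. Assumes the openssl CLI is installed.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")   # e.g. b5213941
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"                   # lookup name is <hash>.0
readlink "$tmp/$hash.0"
```

This is why the log's link names (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) differ per certificate: each is the subject hash of the cert it points at.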
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
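The "doesn't exist, likely first start" message above comes from nothing more than `stat`'s exit status: a missing file makes `stat` exit non-zero, which minikube interprets as a fresh cluster. A minimal reproduction (the path here is a hypothetical stand-in):

```shell
# Existence probe via stat's exit status, mirroring the check in the log.
f=/tmp/no-such-cert.crt
rm -f "$f"
if stat "$f" >/dev/null 2>&1; then
  result="cert exists"
else
  result="likely first start"
fi
echo "$result"
```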
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
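The `--discovery-token-ca-cert-hash` value printed above is a SHA-256 digest of the DER-encoded public key of the cluster CA. It can be recomputed from `ca.crt` with openssl; sketched here against a throwaway CA, since the real `/etc/kubernetes/pki/ca.crt` only exists on the control-plane node:

```shell
# Recompute a kubeadm discovery hash: SHA-256 over the CA's DER-encoded public key.
# Uses a freshly generated CA as a stand-in for the cluster's ca.crt.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

Running this against the node's real `ca.crt` should reproduce the `sha256:6443...` value a joining node must present.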
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
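The `diff ... || { mv ...; systemctl ...; }` command above is an idempotent unit-install idiom: the rendered `docker.service.new` only replaces the on-disk unit (followed by daemon-reload, enable, and restart) when the two differ, so re-provisioning an unchanged machine is a no-op. A minimal standalone sketch of the same idiom, using hypothetical paths under `/tmp` in place of `/lib/systemd/system/docker.service`:

```shell
#!/bin/sh
# Sketch of the "install unit file only if changed" idiom from the log above.
# Paths are hypothetical stand-ins; the real log targets /lib/systemd/system.
set -eu

UNIT=/tmp/demo-docker.service
NEW=/tmp/demo-docker.service.new

# Render the candidate unit (here a trivial two-line stub).
printf '%s\n' '[Unit]' 'Description=Demo' > "$NEW"

# diff exits non-zero when the files differ or $UNIT does not exist yet,
# which is exactly the case where the new unit should be moved into place.
# In the real log this branch also runs daemon-reload/enable/restart.
if ! diff -u "$UNIT" "$NEW" >/dev/null 2>&1; then
    mv "$NEW" "$UNIT"
    echo "unit installed"
fi
```

On a fresh machine `diff` fails with "No such file or directory" (as seen in the log output above), so the install branch always runs on first provision.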
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
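The clock-fix step above reads the guest clock over SSH, computes the delta against the host (5.3s here), and pins the guest with `sudo date -s @<epoch>`. A rough sketch of that comparison, with a hypothetical hard-coded drift in place of the real SSH round-trip, and an illustrative (not minikube's actual) skew threshold:

```shell
#!/bin/sh
# Sketch of the guest clock-skew check from the log above.
# guest_epoch is faked; the real code reads it over SSH with `date +%s.%N`.
host_epoch=$(date +%s)
guest_epoch=$((host_epoch + 7))   # pretend the guest drifted 7s ahead

# Absolute skew between guest and host clocks.
delta=$((guest_epoch - host_epoch))
[ "$delta" -lt 0 ] && delta=$((0 - delta))

# When skew is noticeable, reset the guest clock to the host's epoch
# (the log runs: sudo date -s @1713205508). Threshold here is illustrative.
if [ "$delta" -gt 2 ]; then
    echo "would run: sudo date -s @$host_epoch"
fi
```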
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 
	
	
	==> Docker <==
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:18 ha-653100 dockerd[1321]: 2024/04/15 18:40:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:04 ha-653100 dockerd[1321]: 2024/04/15 18:45:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:04 ha-653100 dockerd[1321]: 2024/04/15 18:45:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:01 ha-653100 dockerd[1321]: 2024/04/15 18:46:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:01 ha-653100 dockerd[1321]: 2024/04/15 18:46:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3810def19c30b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   4ba88ccaba1a5       busybox-7fdf7869d9-5w5x4
	58d38dcc399d7       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   66b040582e9fe       coredns-76f75df574-hz5n2
	7f2e95849717e       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   41946a72e3913       storage-provisioner
	79df4cc493ccd       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   c2bc3be2dada4       coredns-76f75df574-sw766
	8533539a42fc8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago      Running             kindnet-cni               0                   840d4c720c681       kindnet-k8jt8
	ece5eb28b20be       a1d263b5dc5b0                                                                                         24 minutes ago      Running             kube-proxy                0                   590527a253a30       kube-proxy-dgh6m
	0cf5b602fc0c4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     25 minutes ago      Running             kube-vip                  0                   71c70584ee9c6       kube-vip-ha-653100
	a0697c56404b8       6052a25da3f97                                                                                         25 minutes ago      Running             kube-controller-manager   0                   5c4190df9fb18       kube-controller-manager-ha-653100
	d68da55f0f382       8c390d98f50c0                                                                                         25 minutes ago      Running             kube-scheduler            0                   92e96b6d41bb2       kube-scheduler-ha-653100
	b7958fc0d30b8       39f995c9f1996                                                                                         25 minutes ago      Running             kube-apiserver            0                   a7b3e44514ced       kube-apiserver-ha-653100
	a0fa6c17de399       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   65fe5df3a93dd       etcd-ha-653100
	
	
	==> coredns [58d38dcc399d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45845 - 8967 "HINFO IN 8354542665525626293.2689365418710486320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045113649s
	[INFO] 10.244.0.4:51221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.078581836s
	[INFO] 10.244.0.4:47875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.52769764s
	[INFO] 10.244.0.4:52717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306601s
	[INFO] 10.244.0.4:39163 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050987688s
	[INFO] 10.244.0.4:37816 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001543s
	[INFO] 10.244.0.4:60144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014447825s
	[INFO] 10.244.0.4:55552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204001s
	[INFO] 10.244.0.4:36177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153901s
	[INFO] 10.244.0.4:46410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000283001s
	[INFO] 10.244.0.4:57190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168701s
	[INFO] 10.244.0.4:47185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002385s
	[INFO] 10.244.0.4:34139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001337s
	[INFO] 10.244.0.4:51029 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000098701s
	
	
	==> coredns [79df4cc493cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57426 - 12156 "HINFO IN 2507889984284766848.6813386495577107890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.33687907s
	[INFO] 10.244.0.4:40226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000337201s
	[INFO] 10.244.0.4:56672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.049146285s
	[INFO] 10.244.0.4:54337 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001723s
	[INFO] 10.244.0.4:58976 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002015s
	[INFO] 10.244.0.4:41149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024s
	[INFO] 10.244.0.4:37438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000310601s
	[INFO] 10.244.0.4:54099 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002466s
	
	
	==> describe nodes <==
	Name:               ha-653100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:22:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:47:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:42:51 +0000   Mon, 15 Apr 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.63.147
	  Hostname:    ha-653100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7ba8367096d4bf9b0e4541361a84287
	  System UUID:                64d5f641-1f2f-ce46-8918-a08d661c1258
	  Boot ID:                    994d41df-0ae9-4f39-ad28-f5e794182c63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-5w5x4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-76f75df574-hz5n2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 coredns-76f75df574-sw766             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-ha-653100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-k8jt8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-653100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-653100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-dgh6m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-653100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-653100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24m   kube-proxy       
	  Normal  Starting                 24m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m   kubelet          Node ha-653100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m   kubelet          Node ha-653100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m   kubelet          Node ha-653100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m   node-controller  Node ha-653100 event: Registered Node ha-653100 in Controller
	  Normal  NodeReady                24m   kubelet          Node ha-653100 status is now: NodeReady
	
	
	Name:               ha-653100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T18_43_42_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:43:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:47:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:44:12 +0000   Mon, 15 Apr 2024 18:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.51.108
	  Hostname:    ha-653100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ceb376f540fe4419a1393b81dd4c70ec
	  System UUID:                316f69f2-57b1-1a4d-9808-3339f6c9e586
	  Boot ID:                    231d6308-8f63-4640-95e7-8ba95af6dfa1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rtbf9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m36s
	  kube-system                 kube-proxy-kvnct    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m24s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m36s (x2 over 3m36s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m36s (x2 over 3m36s)  kubelet          Node ha-653100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m36s (x2 over 3m36s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s                  node-controller  Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller
	  Normal  NodeReady                3m15s                  kubelet          Node ha-653100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.084698] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.376265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 18:21] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.209937] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +33.615481] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.104388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615924] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.216331] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.260985] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.876807] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.212935] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.227831] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.311128] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[Apr15 18:22] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.114802] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.164512] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +7.677617] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.774902] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.244487] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[ +14.155639] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.279744] kauditd_printk_skb: 29 callbacks suppressed
	[Apr15 18:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0fa6c17de39] <==
	{"level":"info","ts":"2024-04-15T18:42:17.443797Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2021,"took":"11.390416ms","hash":1421491769,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-15T18:42:17.443837Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1421491769,"revision":2021,"compact-revision":1485}
	{"level":"info","ts":"2024-04-15T18:43:33.518295Z","caller":"traceutil/trace.go:171","msg":"trace[443010453] transaction","detail":"{read_only:false; response_revision:2695; number_of_response:1; }","duration":"266.440557ms","start":"2024-04-15T18:43:33.251827Z","end":"2024-04-15T18:43:33.518267Z","steps":["trace[443010453] 'process raft request'  (duration: 266.058657ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010261Z","caller":"traceutil/trace.go:171","msg":"trace[907999705] linearizableReadLoop","detail":"{readStateIndex:2969; appliedIndex:2968; }","duration":"127.140571ms","start":"2024-04-15T18:43:33.882928Z","end":"2024-04-15T18:43:34.010069Z","steps":["trace[907999705] 'read index received'  (duration: 126.931871ms)","trace[907999705] 'applied index is now lower than readState.Index'  (duration: 208.1µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:34.010558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.698771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T18:43:34.01062Z","caller":"traceutil/trace.go:171","msg":"trace[1177634589] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2696; }","duration":"127.798872ms","start":"2024-04-15T18:43:33.882811Z","end":"2024-04-15T18:43:34.01061Z","steps":["trace[1177634589] 'agreement among raft nodes before linearized reading'  (duration: 127.591271ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010373Z","caller":"traceutil/trace.go:171","msg":"trace[320563100] transaction","detail":"{read_only:false; response_revision:2696; number_of_response:1; }","duration":"232.738612ms","start":"2024-04-15T18:43:33.777617Z","end":"2024-04-15T18:43:34.010356Z","steps":["trace[320563100] 'process raft request'  (duration: 232.232111ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.712206Z","caller":"traceutil/trace.go:171","msg":"trace[711121958] transaction","detail":"{read_only:false; response_revision:2697; number_of_response:1; }","duration":"181.877144ms","start":"2024-04-15T18:43:34.530256Z","end":"2024-04-15T18:43:34.712133Z","steps":["trace[711121958] 'process raft request'  (duration: 181.582843ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:46.38138Z","caller":"traceutil/trace.go:171","msg":"trace[31759011] transaction","detail":"{read_only:false; response_revision:2751; number_of_response:1; }","duration":"240.29982ms","start":"2024-04-15T18:43:46.141059Z","end":"2024-04-15T18:43:46.381359Z","steps":["trace[31759011] 'process raft request'  (duration: 230.957808ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283476Z","caller":"traceutil/trace.go:171","msg":"trace[1889997840] linearizableReadLoop","detail":"{readStateIndex:3044; appliedIndex:3043; }","duration":"110.447446ms","start":"2024-04-15T18:43:52.17301Z","end":"2024-04-15T18:43:52.283458Z","steps":["trace[1889997840] 'read index received'  (duration: 110.306146ms)","trace[1889997840] 'applied index is now lower than readState.Index'  (duration: 140.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.283605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.572946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.19.63.147\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-15T18:43:52.283637Z","caller":"traceutil/trace.go:171","msg":"trace[10951613] range","detail":"{range_begin:/registry/masterleases/172.19.63.147; range_end:; response_count:1; response_revision:2766; }","duration":"110.637847ms","start":"2024-04-15T18:43:52.17299Z","end":"2024-04-15T18:43:52.283628Z","steps":["trace[10951613] 'agreement among raft nodes before linearized reading'  (duration: 110.561546ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283903Z","caller":"traceutil/trace.go:171","msg":"trace[1508396561] transaction","detail":"{read_only:false; response_revision:2766; number_of_response:1; }","duration":"114.807253ms","start":"2024-04-15T18:43:52.169084Z","end":"2024-04-15T18:43:52.283892Z","steps":["trace[1508396561] 'process raft request'  (duration: 114.280152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:43:52.666427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.266757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14279945624074152814 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:462c8ee2fed56b6d>","response":"size:41"}
	{"level":"info","ts":"2024-04-15T18:43:52.667005Z","caller":"traceutil/trace.go:171","msg":"trace[1721457016] linearizableReadLoop","detail":"{readStateIndex:3045; appliedIndex:3044; }","duration":"237.167715ms","start":"2024-04-15T18:43:52.429394Z","end":"2024-04-15T18:43:52.666562Z","steps":["trace[1721457016] 'read index received'  (duration: 43.566658ms)","trace[1721457016] 'applied index is now lower than readState.Index'  (duration: 193.598957ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.667407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:43:52.285771Z","time spent":"381.633207ms","remote":"127.0.0.1:45166","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-04-15T18:43:52.66813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.156306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"warn","ts":"2024-04-15T18:43:52.66875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.352418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-653100-m03\" ","response":"range_response_count:1 size:3120"}
	{"level":"info","ts":"2024-04-15T18:43:52.668806Z","caller":"traceutil/trace.go:171","msg":"trace[2016319950] range","detail":"{range_begin:/registry/minions/ha-653100-m03; range_end:; response_count:1; response_revision:2766; }","duration":"239.433618ms","start":"2024-04-15T18:43:52.429363Z","end":"2024-04-15T18:43:52.668797Z","steps":["trace[2016319950] 'agreement among raft nodes before linearized reading'  (duration: 239.350018ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.668276Z","caller":"traceutil/trace.go:171","msg":"trace[416735202] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2766; }","duration":"230.317306ms","start":"2024-04-15T18:43:52.437947Z","end":"2024-04-15T18:43:52.668265Z","steps":["trace[416735202] 'agreement among raft nodes before linearized reading'  (duration: 230.123406ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.788795Z","caller":"traceutil/trace.go:171","msg":"trace[396711083] transaction","detail":"{read_only:false; response_revision:2768; number_of_response:1; }","duration":"109.505445ms","start":"2024-04-15T18:43:52.679272Z","end":"2024-04-15T18:43:52.788777Z","steps":["trace[396711083] 'process raft request'  (duration: 102.216136ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:57.928969Z","caller":"traceutil/trace.go:171","msg":"trace[2037072140] transaction","detail":"{read_only:false; response_revision:2782; number_of_response:1; }","duration":"141.624188ms","start":"2024-04-15T18:43:57.787327Z","end":"2024-04-15T18:43:57.928951Z","steps":["trace[2037072140] 'process raft request'  (duration: 141.090687ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:47:17.452824Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2557}
	{"level":"info","ts":"2024-04-15T18:47:17.463722Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2557,"took":"10.434855ms","hash":4268225983,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1937408,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2024-04-15T18:47:17.464008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4268225983,"revision":2557,"compact-revision":2021}
	
	
	==> kernel <==
	 18:47:17 up 27 min,  0 users,  load average: 0.70, 0.40, 0.31
	Linux ha-653100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8533539a42fc] <==
	I0415 18:46:07.744085       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:46:17.759914       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:46:17.760354       1 main.go:227] handling current node
	I0415 18:46:17.760466       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:46:17.760706       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:46:27.775869       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:46:27.776068       1 main.go:227] handling current node
	I0415 18:46:27.776088       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:46:27.776098       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:46:37.785651       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:46:37.785760       1 main.go:227] handling current node
	I0415 18:46:37.785792       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:46:37.785801       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:46:47.795872       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:46:47.795921       1 main.go:227] handling current node
	I0415 18:46:47.795935       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:46:47.795943       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:46:57.807095       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:46:57.807448       1 main.go:227] handling current node
	I0415 18:46:57.807778       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:46:57.808005       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:47:07.818778       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:47:07.818905       1 main.go:227] handling current node
	I0415 18:47:07.818922       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:47:07.818931       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b7958fc0d30b] <==
	I0415 18:22:19.472339       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:22:19.472452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:22:19.472462       1 cache.go:39] Caches are synced for autoregister controller
	I0415 18:22:19.498049       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:22:19.510348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:22:20.354035       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 18:22:20.363724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 18:22:20.363838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:22:21.763949       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:22:21.866542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:22:22.100224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 18:22:22.118571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.63.147]
	I0415 18:22:22.120605       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 18:22:22.130952       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:22:22.385516       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:22:24.016138       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:22:24.048032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 18:22:24.081226       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:22:36.868875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 18:22:36.898745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 18:43:52.764570       1 trace.go:236] Trace[705971869]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.63.147,type:*v1.Endpoints,resource:apiServerIPInfo (15-Apr-2024 18:43:52.172) (total time: 592ms):
	Trace[705971869]: ---"initial value restored" 112ms (18:43:52.284)
	Trace[705971869]: ---"Transaction prepared" 384ms (18:43:52.669)
	Trace[705971869]: ---"Txn call completed" 95ms (18:43:52.764)
	Trace[705971869]: [592.295387ms] [592.295387ms] END
	
	
	==> kube-controller-manager [a0697c56404b] <==
	I0415 18:22:50.009242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="326.398µs"
	I0415 18:22:50.048064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="90.899µs"
	I0415 18:22:51.764868       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0415 18:22:52.188891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.302µs"
	I0415 18:22:52.287692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.246165ms"
	I0415 18:22:52.288055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="224.505µs"
	I0415 18:22:52.333123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.028652ms"
	I0415 18:22:52.333675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="426.909µs"
	I0415 18:27:05.738408       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0415 18:27:05.789870       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-5w5x4"
	I0415 18:27:05.841032       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-8pgjv"
	I0415 18:27:05.849328       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-tk6sh"
	I0415 18:27:05.899441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="160.526833ms"
	I0415 18:27:05.957239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.716604ms"
	I0415 18:27:05.998341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.497733ms"
	I0415 18:27:05.998579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.3µs"
	I0415 18:27:09.211983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.949061ms"
	I0415 18:27:09.212464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.3µs"
	I0415 18:43:41.144789       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-653100-m03\" does not exist"
	I0415 18:43:41.155868       1 range_allocator.go:380] "Set node PodCIDR" node="ha-653100-m03" podCIDRs=["10.244.1.0/24"]
	I0415 18:43:41.176203       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rtbf9"
	I0415 18:43:41.176231       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kvnct"
	I0415 18:43:42.027348       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-653100-m03"
	I0415 18:43:42.028227       1 event.go:376] "Event occurred" object="ha-653100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller"
	I0415 18:44:02.707914       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-653100-m03"
	
	
	==> kube-proxy [ece5eb28b20b] <==
	I0415 18:22:38.391716       1 server_others.go:72] "Using iptables proxy"
	I0415 18:22:38.407680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.63.147"]
	I0415 18:22:38.495319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:22:38.495346       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:22:38.495361       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:22:38.500785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:22:38.501443       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:22:38.501468       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:22:38.503945       1 config.go:188] "Starting service config controller"
	I0415 18:22:38.504041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:22:38.504268       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:22:38.504770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:22:38.505829       1 config.go:315] "Starting node config controller"
	I0415 18:22:38.507970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:22:38.605316       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:22:38.605583       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:22:38.608238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d68da55f0f38] <==
	W0415 18:22:20.533571       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:22:20.533671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:22:20.559089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.559148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.566941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.569271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.649432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:22:20.649545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:22:20.680518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:22:20.681133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:22:20.703015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:22:20.703474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:22:20.766338       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 18:22:20.766458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 18:22:20.789649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 18:22:20.790593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 18:22:20.803334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:22:20.804054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:22:20.808728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:22:20.809130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:22:20.838937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:22:20.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:22:20.865287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:22:20.865345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 18:22:22.187395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:42:24 ha-653100 kubelet[2226]: E0415 18:42:24.245239    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:42:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:42:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:43:24 ha-653100 kubelet[2226]: E0415 18:43:24.244761    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:43:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:43:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:44:24 ha-653100 kubelet[2226]: E0415 18:44:24.244027    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:44:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:44:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:45:24 ha-653100 kubelet[2226]: E0415 18:45:24.245433    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:45:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:46:24 ha-653100 kubelet[2226]: E0415 18:46:24.244695    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:46:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:46:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:46:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:46:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:47:08.852642    3084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100: (13.2208139s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-653100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-tk6sh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-653100 describe pod busybox-7fdf7869d9-tk6sh
helpers_test.go:282: (dbg) kubectl --context ha-653100 describe pod busybox-7fdf7869d9-tk6sh:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-tk6sh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjshx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjshx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  5m9s (x4 over 20m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  9s                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (75.58s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (127.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 node stop m02 -v=7 --alsologtostderr: (1m10.4686205s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-653100 status -v=7 --alsologtostderr: exit status 1 (20.2667369s)

                                                
                                                
** stderr ** 
	W0415 18:48:43.120564   10796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:48:43.213816   10796 out.go:291] Setting OutFile to fd 644 ...
	I0415 18:48:43.214467   10796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:48:43.214467   10796 out.go:304] Setting ErrFile to fd 984...
	I0415 18:48:43.214467   10796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:48:43.230797   10796 out.go:298] Setting JSON to false
	I0415 18:48:43.230797   10796 mustload.go:65] Loading cluster: ha-653100
	I0415 18:48:43.230797   10796 notify.go:220] Checking for updates...
	I0415 18:48:43.231878   10796 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:48:43.232495   10796 status.go:255] checking status of ha-653100 ...
	I0415 18:48:43.233116   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:48:45.606271   10796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:48:45.606271   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:48:45.606271   10796 status.go:330] ha-653100 host status = "Running" (err=<nil>)
	I0415 18:48:45.606271   10796 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:48:45.610924   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:48:47.962268   10796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:48:47.962268   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:48:47.962816   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:48:50.774588   10796 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:48:50.774588   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:48:50.775498   10796 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:48:50.790020   10796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:48:50.791022   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:48:53.115037   10796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:48:53.115037   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:48:53.115107   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:48:55.929926   10796 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:48:55.929976   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:48:55.929976   10796 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:48:56.035690   10796 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2456277s)
	I0415 18:48:56.052135   10796 ssh_runner.go:195] Run: systemctl --version
	I0415 18:48:56.078780   10796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:48:56.108577   10796 kubeconfig.go:125] found "ha-653100" server: "https://172.19.63.254:8443"
	I0415 18:48:56.108577   10796 api_server.go:166] Checking apiserver status ...
	I0415 18:48:56.125181   10796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:48:56.171795   10796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup
	W0415 18:48:56.200360   10796 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 18:48:56.216878   10796 ssh_runner.go:195] Run: ls
	I0415 18:48:56.224475   10796 api_server.go:253] Checking apiserver healthz at https://172.19.63.254:8443/healthz ...
	I0415 18:48:56.234059   10796 api_server.go:279] https://172.19.63.254:8443/healthz returned 200:
	ok
	I0415 18:48:56.234059   10796 status.go:422] ha-653100 apiserver status = Running (err=<nil>)
	I0415 18:48:56.234059   10796 status.go:257] ha-653100 status: &{Name:ha-653100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:48:56.234059   10796 status.go:255] checking status of ha-653100-m02 ...
	I0415 18:48:56.235770   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:48:58.526517   10796 main.go:141] libmachine: [stdout =====>] : Off
	
	I0415 18:48:58.526653   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:48:58.526653   10796 status.go:330] ha-653100-m02 host status = "Stopped" (err=<nil>)
	I0415 18:48:58.526653   10796 status.go:343] host is not running, skipping remaining checks
	I0415 18:48:58.526653   10796 status.go:257] ha-653100-m02 status: &{Name:ha-653100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:48:58.526811   10796 status.go:255] checking status of ha-653100-m03 ...
	I0415 18:48:58.527649   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:49:00.854512   10796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:49:00.854744   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:49:00.854744   10796 status.go:330] ha-653100-m03 host status = "Running" (err=<nil>)
	I0415 18:49:00.854813   10796 host.go:66] Checking if "ha-653100-m03" exists ...
	I0415 18:49:00.855479   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m03 ).state
	I0415 18:49:03.152761   10796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:49:03.152761   10796 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:49:03.152761   10796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m03 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-653100 status -v=7 --alsologtostderr" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-653100 -n ha-653100: (13.2572799s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-653100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-653100 logs -n 25: (9.103371s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:37 UTC | 15 Apr 24 18:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:38 UTC | 15 Apr 24 18:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- get pods -o          | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC | 15 Apr 24 18:39 UTC |
	|         | busybox-7fdf7869d9-5w5x4             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-5w5x4 -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.48.1             |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-8pgjv             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-653100 -- exec                 | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:39 UTC |                     |
	|         | busybox-7fdf7869d9-tk6sh             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| node    | add -p ha-653100 -v=7                | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:40 UTC | 15 Apr 24 18:44 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	| node    | ha-653100 node stop m02 -v=7         | ha-653100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:47 UTC | 15 Apr 24 18:48 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:19:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:19:03.428900   10384 out.go:291] Setting OutFile to fd 956 ...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.429535   10384 out.go:304] Setting ErrFile to fd 892...
	I0415 18:19:03.429535   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:19:03.456152   10384 out.go:298] Setting JSON to false
	I0415 18:19:03.460969   10384 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16870,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:19:03.460969   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:19:03.468944   10384 out.go:177] * [ha-653100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:19:03.471713   10384 notify.go:220] Checking for updates...
	I0415 18:19:03.474175   10384 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:19:03.479852   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:19:03.482821   10384 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:19:03.485193   10384 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:19:03.488098   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:19:03.491472   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:19:09.177227   10384 out.go:177] * Using the hyperv driver based on user configuration
	I0415 18:19:09.180711   10384 start.go:297] selected driver: hyperv
	I0415 18:19:09.180711   10384 start.go:901] validating driver "hyperv" against <nil>
	I0415 18:19:09.180711   10384 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:19:09.231415   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:19:09.233116   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:19:09.233296   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:19:09.233296   10384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 18:19:09.233296   10384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 18:19:09.233503   10384 start.go:340] cluster config:
	{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:19:09.233896   10384 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:19:09.237716   10384 out.go:177] * Starting "ha-653100" primary control-plane node in "ha-653100" cluster
	I0415 18:19:09.241624   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:19:09.241887   10384 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:19:09.241939   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:19:09.242318   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:19:09.242373   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:19:09.243280   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:19:09.243280   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json: {Name:mk9fcf3e86096a1c3d878c2c5f55d5a5acd00e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:360] acquireMachinesLock for ha-653100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:19:09.244971   10384 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-653100"
	I0415 18:19:09.244971   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:19:09.244971   10384 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 18:19:09.247899   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:19:09.247899   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:19:09.247899   10384 client.go:168] LocalClient.Create starting
	I0415 18:19:09.248830   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:19:09.249101   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:19:09.249148   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:19:09.249731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:19:11.419777   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:11.420812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:19:13.280108   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:13.280637   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:14.855241   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:18.733923   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:18.734210   10384 main.go:141] libmachine: [stderr =====>] : 
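The `Get-VMSwitch` query above keeps switches that are either External or carry the well-known Default Switch GUID, and the driver later settles on "Default Switch". A small Python sketch of that selection logic over the JSON output (a re-implementation for illustration, not libmachine's code; the function name is an assumption):

```python
import json

# Well-known GUID of Hyper-V's built-in Default Switch, as seen in the log.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(get_vmswitch_json: str) -> str:
    """Prefer an External switch; otherwise fall back to the Default Switch."""
    switches = json.loads(get_vmswitch_json)
    # Hyper-V VMSwitchType enum: 0 = Private, 1 = Internal, 2 = External.
    external = [s for s in switches if s["SwitchType"] == 2]
    if external:
        return external[0]["Name"]
    for s in switches:
        if s["Id"].lower() == DEFAULT_SWITCH_ID:
            return s["Name"]
    raise RuntimeError("no usable Hyper-V switch found")
```

Given the JSON shown above (the Default Switch with `SwitchType: 1`, i.e. Internal), this returns `"Default Switch"`, matching the driver's choice.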
	I0415 18:19:18.736243   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:19:19.289879   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: Creating VM...
	I0415 18:19:19.400622   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:22.473592   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:19:22.473592   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:19:24.358372   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:24.358573   10384 main.go:141] libmachine: Creating VHD
	I0415 18:19:24.358573   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:19:28.369440   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 650E0F4D-34EC-4EE4-B011-F395B7FC2B3C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:19:28.369525   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:28.369525   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:19:28.369609   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:19:28.379115   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:31.701668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:31.702065   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd' -SizeBytes 20000MB
	I0415 18:19:34.409230   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:34.409287   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:19:38.391213   10384 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-653100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:19:38.391365   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:38.391448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100 -DynamicMemoryEnabled $false
	I0415 18:19:40.850920   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:40.851446   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100 -Count 2
	I0415 18:19:43.184748   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:43.185230   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:43.185314   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\boot2docker.iso'
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:45.947867   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:45.948906   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\disk.vhd'
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:48.807697   10384 main.go:141] libmachine: Starting VM...
	I0415 18:19:48.808056   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100
	I0415 18:19:52.116173   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:52.117205   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:19:52.117276   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:19:54.557809   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:19:54.558376   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:54.558452   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:19:57.250722   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:19:58.258291   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:00.584210   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:00.584448   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:03.246620   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:03.247582   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:04.255962   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:06.600399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:06.600459   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:09.316612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:10.317022   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:12.741666   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:12.741972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:12.742046   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:20:15.418020   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:16.427460   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:18.790469   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:18.790783   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:21.596566   10384 main.go:141] libmachine: [stderr =====>] : 
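The stretch of log above is a poll loop: the driver repeatedly queries the VM state and the adapter's first IP address, getting empty stdout until DHCP assigns 172.19.63.147 about 30 seconds after `Start-VM`. A minimal sketch of that wait-for-IP pattern (illustrative only; the function name, timeout, and injection points are assumptions, not minikube's API):

```python
import time

def wait_for_ip(get_ip, timeout=120.0, interval=1.0,
                clock=time.monotonic, sleep=time.sleep):
    """Poll get_ip() until it returns a non-empty address.

    Mirrors the loop in the log: an empty stdout from the
    ipaddresses[0] query means the adapter has no lease yet.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        ip = get_ip().strip()
        if ip:
            return ip
        sleep(interval)
    raise TimeoutError("VM did not acquire an IP address in time")
```

The `clock`/`sleep` parameters make the loop testable without real waiting.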
	I0415 18:20:21.597345   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:23.951579   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:23.951579   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:20:23.952606   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:26.247912   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:26.248135   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:29.012297   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:29.019039   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:29.032591   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:29.032673   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:20:29.165965   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:20:29.165965   10384 buildroot.go:166] provisioning hostname "ha-653100"
	I0415 18:20:29.165965   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:31.462885   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:31.462973   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:34.155427   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:34.156301   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:34.162944   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:34.163526   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:34.163526   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100 && echo "ha-653100" | sudo tee /etc/hostname
	I0415 18:20:34.337418   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100
	
	I0415 18:20:34.337418   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:36.655518   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:36.655812   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:39.380784   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:39.389453   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:20:39.390401   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:20:39.390401   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:20:39.543028   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
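The shell script just run over SSH updates `/etc/hosts` idempotently: if no line already maps the hostname, it rewrites an existing `127.0.1.1` entry or appends one. The same logic as a pure-string Python sketch (an illustration, not the provisioner's code; the function name and the exact match pattern are assumptions approximating the `grep -xq` tests above):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Return hosts content guaranteed to map 127.0.1.1 to name."""
    lines = hosts.splitlines()
    # Already mapped? (approximates: grep -xq '.*\s<name>' /etc/hosts)
    if any(re.search(r"\s" + re.escape(name) + r"$", ln) for ln in lines):
        return hosts
    for i, ln in enumerate(lines):
        if ln.startswith("127.0.1.1"):
            lines[i] = f"127.0.1.1 {name}"   # replace the old alias
            break
    else:
        lines.append(f"127.0.1.1 {name}")    # no 127.0.1.1 line: append
    return "\n".join(lines) + "\n"
```

Running it twice is a no-op the second time, which is the property the shell guard is there to provide.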
	I0415 18:20:39.543028   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:20:39.543028   10384 buildroot.go:174] setting up certificates
	I0415 18:20:39.543028   10384 provision.go:84] configureAuth start
	I0415 18:20:39.543611   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:41.851405   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:41.851611   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:41.851695   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:44.624640   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:46.878650   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:46.879166   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:49.633681   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:49.633926   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:49.633926   10384 provision.go:143] copyHostCerts
	I0415 18:20:49.633926   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:20:49.634462   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:20:49.634462   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:20:49.635297   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:20:49.637549   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:20:49.637813   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:20:49.637813   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:20:49.639233   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:20:49.639233   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:20:49.639233   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:20:49.639935   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:20:49.640957   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100 san=[127.0.0.1 172.19.63.147 ha-653100 localhost minikube]
	I0415 18:20:49.905880   10384 provision.go:177] copyRemoteCerts
	I0415 18:20:49.922553   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:20:49.922553   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:52.259882   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:20:54.984473   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:54.984987   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:20:55.101879   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1791462s)
	I0415 18:20:55.101879   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:20:55.102059   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:20:55.153442   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:20:55.153917   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0415 18:20:55.199876   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:20:55.200448   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 18:20:55.254511   10384 provision.go:87] duration metric: took 15.7112643s to configureAuth
	I0415 18:20:55.254511   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:20:55.255352   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:20:55.255474   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:20:57.547699   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:20:57.547786   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:00.303241   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:00.309852   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:00.310680   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:00.310680   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:21:00.455641   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:21:00.455641   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:21:00.455641   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:21:00.455641   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:02.740065   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:02.740841   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:05.487209   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:05.492437   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:05.493558   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:05.493558   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:21:05.663243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:21:05.663359   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:07.945804   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:07.946031   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:10.668442   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:10.674981   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:10.675100   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:10.675100   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:21:12.959357   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:21:12.959357   10384 machine.go:97] duration metric: took 49.0073804s to provisionDockerMachine
	I0415 18:21:12.959357   10384 client.go:171] duration metric: took 2m3.7104605s to LocalClient.Create
	I0415 18:21:12.959357   10384 start.go:167] duration metric: took 2m3.7104605s to libmachine.API.Create "ha-653100"
	I0415 18:21:12.959357   10384 start.go:293] postStartSetup for "ha-653100" (driver="hyperv")
	I0415 18:21:12.959357   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:21:12.974666   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:21:12.974666   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:15.275980   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:18.019740   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:18.019762   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:18.019878   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:18.139960   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1652527s)
	I0415 18:21:18.155380   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:21:18.164559   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:21:18.164559   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:21:18.165434   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:21:18.166112   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:21:18.166112   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:21:18.180084   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:21:18.200844   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:21:18.250132   10384 start.go:296] duration metric: took 5.2907331s for postStartSetup
	I0415 18:21:18.253937   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:20.531894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:23.259067   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:23.259480   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:23.259754   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:21:23.262894   10384 start.go:128] duration metric: took 2m14.0167978s to createHost
	I0415 18:21:23.262950   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:25.573334   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:28.294984   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:28.295213   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:28.304032   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:28.304955   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:28.304955   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:21:28.441121   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205288.448859419
	
	I0415 18:21:28.441191   10384 fix.go:216] guest clock: 1713205288.448859419
	I0415 18:21:28.441191   10384 fix.go:229] Guest: 2024-04-15 18:21:28.448859419 +0000 UTC Remote: 2024-04-15 18:21:23.2629505 +0000 UTC m=+140.027670501 (delta=5.185908919s)
	I0415 18:21:28.441272   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:30.726887   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:30.727164   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:33.517730   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:33.518861   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:33.525281   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:21:33.525856   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.147 22 <nil> <nil>}
	I0415 18:21:33.525856   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205288
	I0415 18:21:33.684173   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:21:28 UTC 2024
	
	I0415 18:21:33.684173   10384 fix.go:236] clock set: Mon Apr 15 18:21:28 UTC 2024
	 (err=<nil>)
	I0415 18:21:33.684173   10384 start.go:83] releasing machines lock for "ha-653100", held for 2m24.4380391s
	I0415 18:21:33.684173   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:35.959004   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:38.693038   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:38.693586   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:38.698246   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:21:38.698432   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:38.709918   10384 ssh_runner.go:195] Run: cat /version.json
	I0415 18:21:38.709918   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:21:41.101868   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:41.102451   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:21:43.920818   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.920972   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.921214   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:43.967273   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:21:43.967331   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:21:43.967331   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:21:44.091517   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3921768s)
	I0415 18:21:44.091595   10384 ssh_runner.go:235] Completed: cat /version.json: (5.3815555s)
	I0415 18:21:44.105965   10384 ssh_runner.go:195] Run: systemctl --version
	I0415 18:21:44.128397   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 18:21:44.135680   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:21:44.149066   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:21:44.177790   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:21:44.177790   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.177790   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:44.228163   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:21:44.262529   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:21:44.285370   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:21:44.301154   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:21:44.336472   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.370998   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:21:44.404889   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:21:44.438672   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:21:44.473968   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:21:44.507568   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:21:44.541278   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:21:44.574748   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:21:44.615798   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:21:44.656765   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:44.866329   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:21:44.902355   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:21:44.917364   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:21:44.958576   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:44.995083   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:21:45.045436   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:21:45.084274   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.126708   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:21:45.197837   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:21:45.224449   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:21:45.274212   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:21:45.295670   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:21:45.317816   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:21:45.364867   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:21:45.594504   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:21:45.794998   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:21:45.795406   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:21:45.851288   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:46.067106   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:21:48.625712   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5574711s)
	I0415 18:21:48.640151   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 18:21:48.681058   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:48.721545   10384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 18:21:48.945328   10384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 18:21:49.172462   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.400402   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 18:21:49.448539   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 18:21:49.489496   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:21:49.703253   10384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 18:21:49.816658   10384 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 18:21:49.830904   10384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 18:21:49.840743   10384 start.go:562] Will wait 60s for crictl version
	I0415 18:21:49.855288   10384 ssh_runner.go:195] Run: which crictl
	I0415 18:21:49.875869   10384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 18:21:49.936713   10384 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 18:21:49.947981   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:49.993965   10384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 18:21:50.032420   10384 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 18:21:50.032553   10384 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 18:21:50.037021   10384 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 18:21:50.039971   10384 ip.go:210] interface addr: 172.19.48.1/20
	I0415 18:21:50.056064   10384 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 18:21:50.062649   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:21:50.097930   10384 kubeadm.go:877] updating cluster {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 18:21:50.097930   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:21:50.108473   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:21:50.131644   10384 docker.go:685] Got preloaded images: 
	I0415 18:21:50.132600   10384 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 18:21:50.146104   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:21:50.181885   10384 ssh_runner.go:195] Run: which lz4
	I0415 18:21:50.188111   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 18:21:50.202072   10384 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 18:21:50.209107   10384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 18:21:50.209107   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 18:21:52.413614   10384 docker.go:649] duration metric: took 2.2254854s to copy over tarball
	I0415 18:21:52.429279   10384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:22:01.379987   10384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9504893s)
	I0415 18:22:01.379987   10384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:22:01.455511   10384 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:22:01.477182   10384 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 18:22:01.536289   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:01.768214   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:22:04.398301   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6300657s)
	I0415 18:22:04.408551   10384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:22:04.433417   10384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 18:22:04.433417   10384 cache_images.go:84] Images are preloaded, skipping loading
	I0415 18:22:04.433417   10384 kubeadm.go:928] updating node { 172.19.63.147 8443 v1.29.3 docker true true} ...
	I0415 18:22:04.433417   10384 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-653100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.63.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:22:04.444220   10384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:22:04.490342   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:04.490402   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:04.490472   10384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:22:04.490526   10384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.63.147 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-653100 NodeName:ha-653100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.63.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.63.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 18:22:04.490735   10384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.63.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-653100"
	  kubeletExtraArgs:
	    node-ip: 172.19.63.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.63.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:22:04.490884   10384 kube-vip.go:111] generating kube-vip config ...
	I0415 18:22:04.505496   10384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 18:22:04.536495   10384 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 18:22:04.536752   10384 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0415 18:22:04.551207   10384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 18:22:04.567905   10384 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:22:04.582348   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 18:22:04.604171   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0415 18:22:04.646000   10384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:22:04.692832   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0415 18:22:04.728604   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0415 18:22:04.775922   10384 ssh_runner.go:195] Run: grep 172.19.63.254	control-plane.minikube.internal$ /etc/hosts
	I0415 18:22:04.783742   10384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
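The `/etc/hosts` update above uses a grep-filter-then-append idiom so the `control-plane.minikube.internal` entry is replaced rather than duplicated on repeated starts. A sketch of the same idiom against a temporary file, so it runs without sudo (the file path and the stale 10.0.0.1 mapping are illustrative):

```shell
# Replace-or-add a hosts entry: strip any stale line for the name, append
# the fresh mapping, then swap the file in with a single move.
h=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$h"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$h"; \
  printf '172.19.63.254\tcontrol-plane.minikube.internal\n'; } > "$h.new"
mv "$h.new" "$h"
cat "$h"
```

Filtering by the tab-anchored hostname (rather than the IP) is what lets the same command work whether or not an old entry with a different address exists.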
	I0415 18:22:04.822733   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:22:05.055746   10384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:22:05.087598   10384 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100 for IP: 172.19.63.147
	I0415 18:22:05.087652   10384 certs.go:194] generating shared ca certs ...
	I0415 18:22:05.087652   10384 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 18:22:05.088303   10384 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:22:05.088915   10384 certs.go:256] generating profile certs ...
	I0415 18:22:05.089546   10384 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key
	I0415 18:22:05.089739   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt with IP's: []
	I0415 18:22:05.327013   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt ...
	I0415 18:22:05.328010   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.crt: {Name:mka413e653e113856769234a348385e515e46303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.329372   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key ...
	I0415 18:22:05.329372   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\client.key: {Name:mk12a79d6acd7fec5ddd98754bb23ab16e83b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.330112   10384 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c
	I0415 18:22:05.331447   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.63.147 172.19.63.254]
	I0415 18:22:05.565428   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c ...
	I0415 18:22:05.565428   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c: {Name:mk5c523ee813d33697660e99fb5da48b385701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.567434   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c ...
	I0415 18:22:05.567434   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c: {Name:mkeadeed87d8879714bf8100a4229bec1246f570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.568511   10384 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt
	I0415 18:22:05.585425   10384 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key.c151ec5c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key
	I0415 18:22:05.586963   10384 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key
	I0415 18:22:05.587129   10384 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt with IP's: []
	I0415 18:22:05.748042   10384 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt ...
	I0415 18:22:05.749020   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt: {Name:mk92c7defdccaf790f51e1080d3836b064a3ba9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.749736   10384 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key ...
	I0415 18:22:05.749736   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key: {Name:mk071663552007da34f935841f25d643d746d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 18:22:05.751078   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 18:22:05.752108   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 18:22:05.752265   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 18:22:05.752517   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 18:22:05.761320   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 18:22:05.761625   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 18:22:05.762397   10384 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 18:22:05.762397   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:22:05.763315   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:22:05.764136   10384 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 18:22:05.764433   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 18:22:05.764684   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:05.764840   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 18:22:05.766228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:22:05.818285   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 18:22:05.869100   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:22:05.927943   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:22:05.982236   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 18:22:06.033436   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:22:06.088918   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:22:06.140228   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 18:22:06.194914   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 18:22:06.244585   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:22:06.295695   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 18:22:06.348962   10384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:22:06.398272   10384 ssh_runner.go:195] Run: openssl version
	I0415 18:22:06.422630   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 18:22:06.459842   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.467290   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.480612   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 18:22:06.503535   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:22:06.538561   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:22:06.572574   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.580950   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.595127   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:22:06.618634   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 18:22:06.655478   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 18:22:06.690402   10384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.698649   10384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.712709   10384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 18:22:06.735899   10384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
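The `ln -fs ... /etc/ssl/certs/3ec20f2e.0` steps above follow OpenSSL's hashed-directory convention: a trust store directory is indexed by `<subject-hash>.0` links, where the hash comes from `openssl x509 -hash`. A local sketch with a throwaway self-signed certificate (all paths and the `demoCA` subject are illustrative):

```shell
# OpenSSL locates CA certs in a hashed directory via <subject-hash>.0
# links, which is what the symlinks under /etc/ssl/certs set up above.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$d/ca.key" -out "$d/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$d/ca.pem")
ln -fs "$d/ca.pem" "$d/$hash.0"
ls "$d/$hash.0"
```

The `.0` suffix is a collision counter; a second CA whose subject hashed to the same value would be linked as `<hash>.1`.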
	I0415 18:22:06.771243   10384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:22:06.778754   10384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 18:22:06.779215   10384 kubeadm.go:391] StartCluster: {Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:22:06.790653   10384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:22:06.830974   10384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:22:06.866829   10384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:22:06.900593   10384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:22:06.925579   10384 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:22:06.925579   10384 kubeadm.go:156] found existing configuration files:
	
	I0415 18:22:06.940209   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:22:06.959148   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:22:06.975145   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:22:07.014822   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:22:07.031944   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:22:07.045919   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:22:07.081479   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.104063   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:22:07.117753   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:22:07.151118   10384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:22:07.171678   10384 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:22:07.187200   10384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 18:22:07.206408   10384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 18:22:07.712971   10384 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 18:22:24.172226   10384 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 18:22:24.172397   10384 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 18:22:24.172431   10384 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 18:22:24.173023   10384 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 18:22:24.177821   10384 out.go:204]   - Generating certificates and keys ...
	I0415 18:22:24.178357   10384 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 18:22:24.178482   10384 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 18:22:24.178638   10384 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-653100 localhost] and IPs [172.19.63.147 127.0.0.1 ::1]
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 18:22:24.179302   10384 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 18:22:24.180240   10384 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 18:22:24.180240   10384 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 18:22:24.186302   10384 out.go:204]   - Booting up control plane ...
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 18:22:24.187251   10384 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 18:22:24.188243   10384 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 18:22:24.188243   10384 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.567962 seconds
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 18:22:24.188243   10384 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 18:22:24.188243   10384 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 18:22:24.189243   10384 kubeadm.go:309] [mark-control-plane] Marking the node ha-653100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 18:22:24.189243   10384 kubeadm.go:309] [bootstrap-token] Using token: huvy89.hhqbdqsl75p9l7b4
	I0415 18:22:24.194248   10384 out.go:204]   - Configuring RBAC rules ...
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 18:22:24.194248   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 18:22:24.195682   10384 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 18:22:24.196372   10384 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 18:22:24.196724   10384 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 18:22:24.196838   10384 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 18:22:24.196838   10384 kubeadm.go:309] 
	I0415 18:22:24.196838   10384 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 18:22:24.197084   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 18:22:24.197248   10384 kubeadm.go:309] 
	I0415 18:22:24.197248   10384 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 18:22:24.197432   10384 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 18:22:24.197611   10384 kubeadm.go:309] 
	I0415 18:22:24.197611   10384 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 18:22:24.197611   10384 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 18:22:24.197611   10384 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 18:22:24.198307   10384 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 18:22:24.198307   10384 kubeadm.go:309] 	--control-plane 
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.198307   10384 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 18:22:24.198307   10384 kubeadm.go:309] 
	I0415 18:22:24.199302   10384 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token huvy89.hhqbdqsl75p9l7b4 \
	I0415 18:22:24.199302   10384 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 18:22:24.199302   10384 cni.go:84] Creating CNI manager for ""
	I0415 18:22:24.199302   10384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 18:22:24.203263   10384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 18:22:24.221247   10384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 18:22:24.229824   10384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 18:22:24.229824   10384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 18:22:24.323407   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 18:22:25.047319   10384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-653100 minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=ha-653100 minikube.k8s.io/primary=true
	I0415 18:22:25.062350   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.070326   10384 ops.go:34] apiserver oom_adj: -16
	I0415 18:22:25.284655   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:25.790456   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.293504   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:26.795443   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.298654   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:27.786190   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.286860   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:28.788050   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.292845   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:29.794080   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.300169   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:30.788471   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.295339   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:31.798627   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.299958   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:32.791784   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.289567   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:33.791349   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.295367   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:34.804275   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.290745   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:35.794796   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.294136   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:36.799771   10384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 18:22:37.029000   10384 kubeadm.go:1107] duration metric: took 11.9815852s to wait for elevateKubeSystemPrivileges
	W0415 18:22:37.029063   10384 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 18:22:37.029138   10384 kubeadm.go:393] duration metric: took 30.249681s to StartCluster
	I0415 18:22:37.029138   10384 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.029339   10384 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:37.031101   10384 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:22:37.032659   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 18:22:37.032659   10384 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 18:22:37.032732   10384 addons.go:69] Setting storage-provisioner=true in profile "ha-653100"
	I0415 18:22:37.032806   10384 addons.go:234] Setting addon storage-provisioner=true in "ha-653100"
	I0415 18:22:37.032841   10384 addons.go:69] Setting default-storageclass=true in profile "ha-653100"
	I0415 18:22:37.032891   10384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-653100"
	I0415 18:22:37.032987   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:37.032579   10384 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:37.033266   10384 start.go:240] waiting for startup goroutines ...
	I0415 18:22:37.033382   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:37.033632   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.034694   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:37.253650   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 18:22:37.698002   10384 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.456374   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.459088   10384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:22:39.457089   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:39.461772   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:39.461772   10384 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:39.461772   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 18:22:39.462029   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:39.463267   10384 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:22:39.464063   10384 kapi.go:59] client config for ha-653100: &rest.Config{Host:"https://172.19.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-653100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 18:22:39.466136   10384 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 18:22:39.466794   10384 addons.go:234] Setting addon default-storageclass=true in "ha-653100"
	I0415 18:22:39.466794   10384 host.go:66] Checking if "ha-653100" exists ...
	I0415 18:22:39.466794   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.909905   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:41.955061   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:41.955625   10384 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:41.955711   10384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 18:22:41.955711   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100 ).state
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:22:44.377012   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.377984   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100 ).networkadapters[0]).ipaddresses[0]
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:44.805425   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:44.805425   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:44.974779   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.147
	
	I0415 18:22:47.155103   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:47.156316   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100\id_rsa Username:docker}
	I0415 18:22:47.304965   10384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 18:22:47.473026   10384 round_trippers.go:463] GET https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 18:22:47.473026   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.473026   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.473026   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.488496   10384 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0415 18:22:47.490408   10384 round_trippers.go:463] PUT https://172.19.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 18:22:47.490526   10384 round_trippers.go:469] Request Headers:
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Accept: application/json, */*
	I0415 18:22:47.490526   10384 round_trippers.go:473]     Content-Type: application/json
	I0415 18:22:47.490526   10384 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 18:22:47.494518   10384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 18:22:47.498597   10384 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 18:22:47.501457   10384 addons.go:505] duration metric: took 10.468136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 18:22:47.501457   10384 start.go:245] waiting for cluster config update ...
	I0415 18:22:47.501457   10384 start.go:254] writing updated cluster config ...
	I0415 18:22:47.503984   10384 out.go:177] 
	I0415 18:22:47.513974   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:22:47.513974   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.518979   10384 out.go:177] * Starting "ha-653100-m02" control-plane node in "ha-653100" cluster
	I0415 18:22:47.524981   10384 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:47.524981   10384 cache.go:56] Caching tarball of preloaded images
	I0415 18:22:47.526030   10384 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:22:47.526235   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:22:47.526401   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:22:47.528481   10384 start.go:360] acquireMachinesLock for ha-653100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 18:22:47.528921   10384 start.go:364] duration metric: took 121.6µs to acquireMachinesLock for "ha-653100-m02"
	I0415 18:22:47.529077   10384 start.go:93] Provisioning new machine with config: &{Name:ha-653100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-653100 Namespace:default APIServerHAVIP:172.19.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.63.147 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:22:47.529280   10384 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 18:22:47.540485   10384 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 18:22:47.541556   10384 start.go:159] libmachine.API.Create for "ha-653100" (driver="hyperv")
	I0415 18:22:47.541556   10384 client.go:168] LocalClient.Create starting
	I0415 18:22:47.542079   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542415   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:47.542700   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 18:22:49.574978   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:49.576110   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 18:22:51.479178   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:51.479600   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:22:53.065829   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:53.066593   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:22:57.052062   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:22:57.052234   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:22:57.055252   10384 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 18:22:57.583068   10384 main.go:141] libmachine: Creating SSH key...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: Creating VM...
	I0415 18:22:57.931279   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 18:23:01.081349   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:01.082298   10384 main.go:141] libmachine: Using switch "Default Switch"
	I0415 18:23:01.082375   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 18:23:02.972464   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:02.972464   10384 main.go:141] libmachine: Creating VHD
	I0415 18:23:02.973018   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEE7E17F-5E93-468C-BA30-08390D1CA178
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 18:23:06.989219   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing magic tar header
	I0415 18:23:06.989219   10384 main.go:141] libmachine: Writing SSH key tar header
	I0415 18:23:06.990286   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:10.344718   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:10.344872   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd' -SizeBytes 20000MB
	I0415 18:23:13.048066   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:13.048981   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:13.049137   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-653100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 18:23:17.000979   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:17.001667   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-653100-m02 -DynamicMemoryEnabled $false
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:19.529184   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-653100-m02 -Count 2
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:21.929952   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:21.930071   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\boot2docker.iso'
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:24.786919   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-653100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\disk.vhd'
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:27.665809   10384 main.go:141] libmachine: Starting VM...
	I0415 18:23:27.666001   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-653100-m02
	I0415 18:23:31.102209   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:31.103144   10384 main.go:141] libmachine: Waiting for host to start...
	I0415 18:23:31.103144   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:33.569054   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:36.303048   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:37.312865   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:39.749364   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:39.749620   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:39.749702   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:42.512466   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:42.512842   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:43.518477   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:45.904872   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:45.905633   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:48.594507   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:48.594669   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:49.606615   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:51.980362   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:51.981179   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:23:54.737668   10384 main.go:141] libmachine: [stdout =====>] : 
	I0415 18:23:54.738407   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:55.749257   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:23:58.134602   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:23:58.135468   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:00.918915   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:00.919329   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:00.919408   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:03.202618   10384 machine.go:94] provisionDockerMachine start ...
	I0415 18:24:03.202618   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:05.548511   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:05.549191   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:08.289644   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:08.290567   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:08.299809   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:08.300714   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:08.300714   10384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:24:08.446422   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 18:24:08.446972   10384 buildroot.go:166] provisioning hostname "ha-653100-m02"
	I0415 18:24:08.446972   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:10.773426   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:13.530172   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:13.536850   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:13.537708   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:13.537708   10384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-653100-m02 && echo "ha-653100-m02" | sudo tee /etc/hostname
	I0415 18:24:13.707716   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-653100-m02
	
	I0415 18:24:13.707716   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:16.005330   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:18.762850   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:18.770232   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:18.770901   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:18.770901   10384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-653100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-653100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-653100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:24:18.936615   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 18:24:18.936615   10384 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 18:24:18.937152   10384 buildroot.go:174] setting up certificates
	I0415 18:24:18.937207   10384 provision.go:84] configureAuth start
	I0415 18:24:18.937207   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:21.299996   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:21.300197   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:24.133316   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:24.134096   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:24.134153   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:26.489254   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:26.489549   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:29.236160   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:29.236234   10384 provision.go:143] copyHostCerts
	I0415 18:24:29.236417   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 18:24:29.236539   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:24:29.236539   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 18:24:29.237340   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 18:24:29.238595   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 18:24:29.238972   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:24:29.238972   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 18:24:29.239408   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:24:29.240639   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 18:24:29.240835   10384 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:24:29.240835   10384 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 18:24:29.241419   10384 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:24:29.242408   10384 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-653100-m02 san=[127.0.0.1 172.19.63.104 ha-653100-m02 localhost minikube]
	I0415 18:24:29.398831   10384 provision.go:177] copyRemoteCerts
	I0415 18:24:29.412927   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:24:29.412927   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:31.723514   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:31.723616   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:34.496654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:34.497398   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:34.615182   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2022138s)
	I0415 18:24:34.615182   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 18:24:34.615849   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 18:24:34.668445   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 18:24:34.668971   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 18:24:34.720499   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 18:24:34.721156   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:24:34.770381   10384 provision.go:87] duration metric: took 15.8330476s to configureAuth
	I0415 18:24:34.770381   10384 buildroot.go:189] setting minikube options for container-runtime
	I0415 18:24:34.770381   10384 config.go:182] Loaded profile config "ha-653100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:24:34.770381   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:37.079755   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:37.080689   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:39.859679   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:39.859754   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:39.866117   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:39.866820   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:39.866820   10384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:24:40.015731   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 18:24:40.015731   10384 buildroot.go:70] root file system type: tmpfs
	I0415 18:24:40.015731   10384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:24:40.015731   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:42.404944   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:42.405443   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:45.210326   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:45.210813   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:45.216335   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:45.216939   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:45.216939   10384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.63.147"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:24:45.394927   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.63.147
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:24:45.395706   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:47.711900   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:47.712499   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:47.712595   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:50.491344   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:50.502173   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:24:50.502173   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:24:50.502173   10384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 18:24:52.836243   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 18:24:52.836243   10384 machine.go:97] duration metric: took 49.6332282s to provisionDockerMachine
	I0415 18:24:52.836243   10384 client.go:171] duration metric: took 2m5.2936865s to LocalClient.Create
	I0415 18:24:52.836243   10384 start.go:167] duration metric: took 2m5.2936865s to libmachine.API.Create "ha-653100"
	I0415 18:24:52.836243   10384 start.go:293] postStartSetup for "ha-653100-m02" (driver="hyperv")
	I0415 18:24:52.836243   10384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 18:24:52.850899   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 18:24:52.851896   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:24:55.199036   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:55.199775   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:24:58.012510   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:24:58.013353   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:24:58.013914   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:24:58.132196   10384 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2802026s)
	I0415 18:24:58.147452   10384 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 18:24:58.154532   10384 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 18:24:58.154532   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 18:24:58.155095   10384 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 18:24:58.156186   10384 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 18:24:58.156186   10384 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 18:24:58.170256   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 18:24:58.189873   10384 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 18:24:58.243032   10384 start.go:296] duration metric: took 5.4067454s for postStartSetup
	I0415 18:24:58.246437   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:00.550399   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:00.550894   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:03.289044   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:03.289835   10384 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-653100\config.json ...
	I0415 18:25:03.292186   10384 start.go:128] duration metric: took 2m15.7618211s to createHost
	I0415 18:25:03.292186   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:05.668753   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:05.668966   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:08.439658   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:08.447000   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:08.447864   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:08.447864   10384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 18:25:08.589758   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713205508.597287833
	
	I0415 18:25:08.589758   10384 fix.go:216] guest clock: 1713205508.597287833
	I0415 18:25:08.589758   10384 fix.go:229] Guest: 2024-04-15 18:25:08.597287833 +0000 UTC Remote: 2024-04-15 18:25:03.2921862 +0000 UTC m=+360.055147501 (delta=5.305101633s)
	I0415 18:25:08.590328   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:10.915118   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:10.916067   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:13.650013   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:13.650612   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:13.656497   10384 main.go:141] libmachine: Using SSH client type: native
	I0415 18:25:13.657104   10384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.63.104 22 <nil> <nil>}
	I0415 18:25:13.657182   10384 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713205508
	I0415 18:25:13.813133   10384 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 18:25:08 UTC 2024
	
	I0415 18:25:13.813133   10384 fix.go:236] clock set: Mon Apr 15 18:25:08 UTC 2024
	 (err=<nil>)
	I0415 18:25:13.813133   10384 start.go:83] releasing machines lock for "ha-653100-m02", held for 2m26.2829576s
	I0415 18:25:13.813133   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:16.141194   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:16.141380   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:18.957495   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:18.960756   10384 out.go:177] * Found network options:
	I0415 18:25:18.964431   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.966627   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.969406   10384 out.go:177]   - NO_PROXY=172.19.63.147
	W0415 18:25:18.972226   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 18:25:18.975235   10384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 18:25:18.977840   10384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 18:25:18.977840   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:18.990793   10384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 18:25:18.990793   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-653100-m02 ).state
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.355429   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:21.374654   10384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-653100-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 18:25:24.278775   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.279572   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.280405   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stdout =====>] : 172.19.63.104
	
	I0415 18:25:24.306668   10384 main.go:141] libmachine: [stderr =====>] : 
	I0415 18:25:24.308123   10384 sshutil.go:53] new ssh client: &{IP:172.19.63.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-653100-m02\id_rsa Username:docker}
	I0415 18:25:24.386474   10384 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3956377s)
	W0415 18:25:24.386474   10384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 18:25:24.404866   10384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 18:25:24.481327   10384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 18:25:24.481327   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:24.481327   10384 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5034427s)
	I0415 18:25:24.481327   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:24.536359   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 18:25:24.572347   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 18:25:24.593352   10384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 18:25:24.610729   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 18:25:24.650456   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.693297   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 18:25:24.730594   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 18:25:24.771078   10384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 18:25:24.812358   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 18:25:24.854948   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 18:25:24.893956   10384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 18:25:24.934484   10384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 18:25:24.974849   10384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 18:25:25.012928   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:25.269094   10384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 18:25:25.319374   10384 start.go:494] detecting cgroup driver to use...
	I0415 18:25:25.334757   10384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 18:25:25.382030   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.422509   10384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 18:25:25.496212   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 18:25:25.539556   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.586254   10384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 18:25:25.665807   10384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 18:25:25.697619   10384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 18:25:25.754485   10384 ssh_runner.go:195] Run: which cri-dockerd
	I0415 18:25:25.776463   10384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 18:25:25.798310   10384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 18:25:25.849027   10384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 18:25:26.103040   10384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 18:25:26.311089   10384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 18:25:26.311089   10384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 18:25:26.371946   10384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:25:26.596000   10384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:26:27.765978   10384 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1694886s)
	I0415 18:26:27.781002   10384 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 18:26:27.817233   10384 out.go:177] 
	W0415 18:26:27.820189   10384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 18:24:51 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.175281888Z" level=info msg="Starting up"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.176817321Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 18:24:51 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:51.181288215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.216362257Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243075421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243180523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243245725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243263625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243358927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243375528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243544331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243714535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243739035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243751135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.243859138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.244478651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247680919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.247787921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248037026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248177629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248295531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248444935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.248541437Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279315587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279443690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279651894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279764797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.279791497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280197206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.280884220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281341330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281485733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281516134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281561035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281615936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281641736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281663737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281686937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281709538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281727638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281747238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281777139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281822540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281844040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281864141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.281895342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282030744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282122446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282152747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282178548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282205748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282227849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282250949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282279750Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282310750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282329151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282347551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282407752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282432753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282465554Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282584456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282620757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.282637557Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283743481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283842283Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.283903984Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 18:24:51 ha-653100-m02 dockerd[671]: time="2024-04-15T18:24:51.284335093Z" level=info msg="containerd successfully booted in 0.071116s"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.254240790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.289190582Z" level=info msg="Loading containers: start."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.609124512Z" level=info msg="Loading containers: done."
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636265777Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.636518080Z" level=info msg="Daemon has completed initialization"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.840822625Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 18:24:52 ha-653100-m02 dockerd[665]: time="2024-04-15T18:24:52.841084828Z" level=info msg="API listen on [::]:2376"
	Apr 15 18:24:52 ha-653100-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 15 18:25:26 ha-653100-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.632253775Z" level=info msg="Processing signal 'terminated'"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.634242462Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635132157Z" level=info msg="Daemon shutdown complete"
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635380455Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 18:25:26 ha-653100-m02 dockerd[665]: time="2024-04-15T18:25:26.635547254Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 18:25:27 ha-653100-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 18:25:27 ha-653100-m02 dockerd[1016]: time="2024-04-15T18:25:27.736568730Z" level=info msg="Starting up"
	Apr 15 18:26:27 ha-653100-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 18:26:27 ha-653100-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 18:26:27.820189   10384 out.go:239] * 
	W0415 18:26:27.821891   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:26:27.843940   10384 out.go:177] 
	
	
	==> Docker <==
	Apr 15 18:40:19 ha-653100 dockerd[1321]: 2024/04/15 18:40:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:04 ha-653100 dockerd[1321]: 2024/04/15 18:45:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:04 ha-653100 dockerd[1321]: 2024/04/15 18:45:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:45:05 ha-653100 dockerd[1321]: 2024/04/15 18:45:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:01 ha-653100 dockerd[1321]: 2024/04/15 18:46:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:01 ha-653100 dockerd[1321]: 2024/04/15 18:46:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:46:02 ha-653100 dockerd[1321]: 2024/04/15 18:46:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:47:17 ha-653100 dockerd[1321]: 2024/04/15 18:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3810def19c30b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Running             busybox                   0                   4ba88ccaba1a5       busybox-7fdf7869d9-5w5x4
	58d38dcc399d7       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   66b040582e9fe       coredns-76f75df574-hz5n2
	7f2e95849717e       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   41946a72e3913       storage-provisioner
	79df4cc493ccd       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   c2bc3be2dada4       coredns-76f75df574-sw766
	8533539a42fc8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   840d4c720c681       kindnet-k8jt8
	ece5eb28b20be       a1d263b5dc5b0                                                                                         26 minutes ago      Running             kube-proxy                0                   590527a253a30       kube-proxy-dgh6m
	0cf5b602fc0c4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     27 minutes ago      Running             kube-vip                  0                   71c70584ee9c6       kube-vip-ha-653100
	a0697c56404b8       6052a25da3f97                                                                                         27 minutes ago      Running             kube-controller-manager   0                   5c4190df9fb18       kube-controller-manager-ha-653100
	d68da55f0f382       8c390d98f50c0                                                                                         27 minutes ago      Running             kube-scheduler            0                   92e96b6d41bb2       kube-scheduler-ha-653100
	b7958fc0d30b8       39f995c9f1996                                                                                         27 minutes ago      Running             kube-apiserver            0                   a7b3e44514ced       kube-apiserver-ha-653100
	a0fa6c17de399       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   65fe5df3a93dd       etcd-ha-653100
	
	
	==> coredns [58d38dcc399d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45845 - 8967 "HINFO IN 8354542665525626293.2689365418710486320. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045113649s
	[INFO] 10.244.0.4:51221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.078581836s
	[INFO] 10.244.0.4:47875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.52769764s
	[INFO] 10.244.0.4:52717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306601s
	[INFO] 10.244.0.4:39163 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050987688s
	[INFO] 10.244.0.4:37816 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001543s
	[INFO] 10.244.0.4:60144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014447825s
	[INFO] 10.244.0.4:55552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204001s
	[INFO] 10.244.0.4:36177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153901s
	[INFO] 10.244.0.4:46410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000283001s
	[INFO] 10.244.0.4:57190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168701s
	[INFO] 10.244.0.4:47185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002385s
	[INFO] 10.244.0.4:34139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001337s
	[INFO] 10.244.0.4:51029 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000098701s
	
	
	==> coredns [79df4cc493cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57426 - 12156 "HINFO IN 2507889984284766848.6813386495577107890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.33687907s
	[INFO] 10.244.0.4:40226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000337201s
	[INFO] 10.244.0.4:56672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.049146285s
	[INFO] 10.244.0.4:54337 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001723s
	[INFO] 10.244.0.4:58976 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002015s
	[INFO] 10.244.0.4:41149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024s
	[INFO] 10.244.0.4:37438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000310601s
	[INFO] 10.244.0.4:54099 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002466s
	
	
	==> describe nodes <==
	Name:               ha-653100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T18_22_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:22:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:49:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:47:56 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:47:56 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:47:56 +0000   Mon, 15 Apr 2024 18:22:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:47:56 +0000   Mon, 15 Apr 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.63.147
	  Hostname:    ha-653100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7ba8367096d4bf9b0e4541361a84287
	  System UUID:                64d5f641-1f2f-ce46-8918-a08d661c1258
	  Boot ID:                    994d41df-0ae9-4f39-ad28-f5e794182c63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-5w5x4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-76f75df574-hz5n2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-76f75df574-sw766             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-653100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-k8jt8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-653100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-653100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-dgh6m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-653100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-653100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-653100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-653100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-653100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m   node-controller  Node ha-653100 event: Registered Node ha-653100 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-653100 status is now: NodeReady
	
	
	Name:               ha-653100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-653100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=ha-653100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T18_43_42_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 18:43:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-653100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 18:49:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 18:47:46 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 18:47:46 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 18:47:46 +0000   Mon, 15 Apr 2024 18:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 18:47:46 +0000   Mon, 15 Apr 2024 18:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.51.108
	  Hostname:    ha-653100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ceb376f540fe4419a1393b81dd4c70ec
	  System UUID:                316f69f2-57b1-1a4d-9808-3339f6c9e586
	  Boot ID:                    231d6308-8f63-4640-95e7-8ba95af6dfa1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-8pgjv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-rtbf9               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m44s
	  kube-system                 kube-proxy-kvnct            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet          Node ha-653100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet          Node ha-653100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller
	  Normal  NodeReady                5m23s                  kubelet          Node ha-653100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.084698] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.376265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 18:21] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.209937] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +33.615481] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.104388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615924] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.216331] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.260985] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.876807] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.212935] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.227831] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.311128] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[Apr15 18:22] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.114802] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.164512] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +7.677617] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.774902] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.244487] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[ +14.155639] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.279744] kauditd_printk_skb: 29 callbacks suppressed
	[Apr15 18:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0fa6c17de39] <==
	{"level":"info","ts":"2024-04-15T18:42:17.443797Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2021,"took":"11.390416ms","hash":1421491769,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-15T18:42:17.443837Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1421491769,"revision":2021,"compact-revision":1485}
	{"level":"info","ts":"2024-04-15T18:43:33.518295Z","caller":"traceutil/trace.go:171","msg":"trace[443010453] transaction","detail":"{read_only:false; response_revision:2695; number_of_response:1; }","duration":"266.440557ms","start":"2024-04-15T18:43:33.251827Z","end":"2024-04-15T18:43:33.518267Z","steps":["trace[443010453] 'process raft request'  (duration: 266.058657ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010261Z","caller":"traceutil/trace.go:171","msg":"trace[907999705] linearizableReadLoop","detail":"{readStateIndex:2969; appliedIndex:2968; }","duration":"127.140571ms","start":"2024-04-15T18:43:33.882928Z","end":"2024-04-15T18:43:34.010069Z","steps":["trace[907999705] 'read index received'  (duration: 126.931871ms)","trace[907999705] 'applied index is now lower than readState.Index'  (duration: 208.1µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:34.010558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.698771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T18:43:34.01062Z","caller":"traceutil/trace.go:171","msg":"trace[1177634589] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2696; }","duration":"127.798872ms","start":"2024-04-15T18:43:33.882811Z","end":"2024-04-15T18:43:34.01061Z","steps":["trace[1177634589] 'agreement among raft nodes before linearized reading'  (duration: 127.591271ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.010373Z","caller":"traceutil/trace.go:171","msg":"trace[320563100] transaction","detail":"{read_only:false; response_revision:2696; number_of_response:1; }","duration":"232.738612ms","start":"2024-04-15T18:43:33.777617Z","end":"2024-04-15T18:43:34.010356Z","steps":["trace[320563100] 'process raft request'  (duration: 232.232111ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:34.712206Z","caller":"traceutil/trace.go:171","msg":"trace[711121958] transaction","detail":"{read_only:false; response_revision:2697; number_of_response:1; }","duration":"181.877144ms","start":"2024-04-15T18:43:34.530256Z","end":"2024-04-15T18:43:34.712133Z","steps":["trace[711121958] 'process raft request'  (duration: 181.582843ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:46.38138Z","caller":"traceutil/trace.go:171","msg":"trace[31759011] transaction","detail":"{read_only:false; response_revision:2751; number_of_response:1; }","duration":"240.29982ms","start":"2024-04-15T18:43:46.141059Z","end":"2024-04-15T18:43:46.381359Z","steps":["trace[31759011] 'process raft request'  (duration: 230.957808ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283476Z","caller":"traceutil/trace.go:171","msg":"trace[1889997840] linearizableReadLoop","detail":"{readStateIndex:3044; appliedIndex:3043; }","duration":"110.447446ms","start":"2024-04-15T18:43:52.17301Z","end":"2024-04-15T18:43:52.283458Z","steps":["trace[1889997840] 'read index received'  (duration: 110.306146ms)","trace[1889997840] 'applied index is now lower than readState.Index'  (duration: 140.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.283605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.572946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.19.63.147\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-15T18:43:52.283637Z","caller":"traceutil/trace.go:171","msg":"trace[10951613] range","detail":"{range_begin:/registry/masterleases/172.19.63.147; range_end:; response_count:1; response_revision:2766; }","duration":"110.637847ms","start":"2024-04-15T18:43:52.17299Z","end":"2024-04-15T18:43:52.283628Z","steps":["trace[10951613] 'agreement among raft nodes before linearized reading'  (duration: 110.561546ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.283903Z","caller":"traceutil/trace.go:171","msg":"trace[1508396561] transaction","detail":"{read_only:false; response_revision:2766; number_of_response:1; }","duration":"114.807253ms","start":"2024-04-15T18:43:52.169084Z","end":"2024-04-15T18:43:52.283892Z","steps":["trace[1508396561] 'process raft request'  (duration: 114.280152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:43:52.666427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.266757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14279945624074152814 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:462c8ee2fed56b6d>","response":"size:41"}
	{"level":"info","ts":"2024-04-15T18:43:52.667005Z","caller":"traceutil/trace.go:171","msg":"trace[1721457016] linearizableReadLoop","detail":"{readStateIndex:3045; appliedIndex:3044; }","duration":"237.167715ms","start":"2024-04-15T18:43:52.429394Z","end":"2024-04-15T18:43:52.666562Z","steps":["trace[1721457016] 'read index received'  (duration: 43.566658ms)","trace[1721457016] 'applied index is now lower than readState.Index'  (duration: 193.598957ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:43:52.667407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:43:52.285771Z","time spent":"381.633207ms","remote":"127.0.0.1:45166","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-04-15T18:43:52.66813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.156306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"warn","ts":"2024-04-15T18:43:52.66875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.352418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-653100-m03\" ","response":"range_response_count:1 size:3120"}
	{"level":"info","ts":"2024-04-15T18:43:52.668806Z","caller":"traceutil/trace.go:171","msg":"trace[2016319950] range","detail":"{range_begin:/registry/minions/ha-653100-m03; range_end:; response_count:1; response_revision:2766; }","duration":"239.433618ms","start":"2024-04-15T18:43:52.429363Z","end":"2024-04-15T18:43:52.668797Z","steps":["trace[2016319950] 'agreement among raft nodes before linearized reading'  (duration: 239.350018ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.668276Z","caller":"traceutil/trace.go:171","msg":"trace[416735202] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2766; }","duration":"230.317306ms","start":"2024-04-15T18:43:52.437947Z","end":"2024-04-15T18:43:52.668265Z","steps":["trace[416735202] 'agreement among raft nodes before linearized reading'  (duration: 230.123406ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:52.788795Z","caller":"traceutil/trace.go:171","msg":"trace[396711083] transaction","detail":"{read_only:false; response_revision:2768; number_of_response:1; }","duration":"109.505445ms","start":"2024-04-15T18:43:52.679272Z","end":"2024-04-15T18:43:52.788777Z","steps":["trace[396711083] 'process raft request'  (duration: 102.216136ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:43:57.928969Z","caller":"traceutil/trace.go:171","msg":"trace[2037072140] transaction","detail":"{read_only:false; response_revision:2782; number_of_response:1; }","duration":"141.624188ms","start":"2024-04-15T18:43:57.787327Z","end":"2024-04-15T18:43:57.928951Z","steps":["trace[2037072140] 'process raft request'  (duration: 141.090687ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:47:17.452824Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2557}
	{"level":"info","ts":"2024-04-15T18:47:17.463722Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2557,"took":"10.434855ms","hash":4268225983,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1937408,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2024-04-15T18:47:17.464008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4268225983,"revision":2557,"compact-revision":2021}
	
	
	==> kernel <==
	 18:49:25 up 29 min,  0 users,  load average: 0.55, 0.42, 0.32
	Linux ha-653100 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8533539a42fc] <==
	I0415 18:48:17.898624       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:48:27.905624       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:48:27.905735       1 main.go:227] handling current node
	I0415 18:48:27.905750       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:48:27.905760       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:48:37.915211       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:48:37.915327       1 main.go:227] handling current node
	I0415 18:48:37.915344       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:48:37.915354       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:48:47.926086       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:48:47.926337       1 main.go:227] handling current node
	I0415 18:48:47.926357       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:48:47.926421       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:48:57.942137       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:48:57.942308       1 main.go:227] handling current node
	I0415 18:48:57.942324       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:48:57.942333       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:49:07.958127       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:49:07.958209       1 main.go:227] handling current node
	I0415 18:49:07.958223       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:49:07.958232       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	I0415 18:49:17.976358       1 main.go:223] Handling node with IPs: map[172.19.63.147:{}]
	I0415 18:49:17.976467       1 main.go:227] handling current node
	I0415 18:49:17.976484       1 main.go:223] Handling node with IPs: map[172.19.51.108:{}]
	I0415 18:49:17.976493       1 main.go:250] Node ha-653100-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b7958fc0d30b] <==
	I0415 18:22:19.472339       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 18:22:19.472452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 18:22:19.472462       1 cache.go:39] Caches are synced for autoregister controller
	I0415 18:22:19.498049       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 18:22:19.510348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 18:22:20.354035       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 18:22:20.363724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 18:22:20.363838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 18:22:21.763949       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 18:22:21.866542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 18:22:22.100224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 18:22:22.118571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.63.147]
	I0415 18:22:22.120605       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 18:22:22.130952       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 18:22:22.385516       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 18:22:24.016138       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 18:22:24.048032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 18:22:24.081226       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 18:22:36.868875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 18:22:36.898745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 18:43:52.764570       1 trace.go:236] Trace[705971869]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.63.147,type:*v1.Endpoints,resource:apiServerIPInfo (15-Apr-2024 18:43:52.172) (total time: 592ms):
	Trace[705971869]: ---"initial value restored" 112ms (18:43:52.284)
	Trace[705971869]: ---"Transaction prepared" 384ms (18:43:52.669)
	Trace[705971869]: ---"Txn call completed" 95ms (18:43:52.764)
	Trace[705971869]: [592.295387ms] [592.295387ms] END
	
	
	==> kube-controller-manager [a0697c56404b] <==
	I0415 18:22:52.288055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="224.505µs"
	I0415 18:22:52.333123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.028652ms"
	I0415 18:22:52.333675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="426.909µs"
	I0415 18:27:05.738408       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0415 18:27:05.789870       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-5w5x4"
	I0415 18:27:05.841032       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-8pgjv"
	I0415 18:27:05.849328       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-tk6sh"
	I0415 18:27:05.899441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="160.526833ms"
	I0415 18:27:05.957239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.716604ms"
	I0415 18:27:05.998341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.497733ms"
	I0415 18:27:05.998579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.3µs"
	I0415 18:27:09.211983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.949061ms"
	I0415 18:27:09.212464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.3µs"
	I0415 18:43:41.144789       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-653100-m03\" does not exist"
	I0415 18:43:41.155868       1 range_allocator.go:380] "Set node PodCIDR" node="ha-653100-m03" podCIDRs=["10.244.1.0/24"]
	I0415 18:43:41.176203       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rtbf9"
	I0415 18:43:41.176231       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kvnct"
	I0415 18:43:42.027348       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-653100-m03"
	I0415 18:43:42.028227       1 event.go:376] "Event occurred" object="ha-653100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-653100-m03 event: Registered Node ha-653100-m03 in Controller"
	I0415 18:44:02.707914       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-653100-m03"
	I0415 18:47:23.860508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="123.902µs"
	I0415 18:47:23.861316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="118.402µs"
	I0415 18:47:23.883423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="69.901µs"
	I0415 18:47:26.732778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.756987ms"
	I0415 18:47:26.733757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="156.003µs"
	
	
	==> kube-proxy [ece5eb28b20b] <==
	I0415 18:22:38.391716       1 server_others.go:72] "Using iptables proxy"
	I0415 18:22:38.407680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.63.147"]
	I0415 18:22:38.495319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 18:22:38.495346       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 18:22:38.495361       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:22:38.500785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:22:38.501443       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:22:38.501468       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:22:38.503945       1 config.go:188] "Starting service config controller"
	I0415 18:22:38.504041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:22:38.504268       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:22:38.504770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:22:38.505829       1 config.go:315] "Starting node config controller"
	I0415 18:22:38.507970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:22:38.605316       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:22:38.605583       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:22:38.608238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d68da55f0f38] <==
	W0415 18:22:20.533571       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:22:20.533671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:22:20.559089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.559148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.566941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:22:20.569271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:22:20.649432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:22:20.649545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 18:22:20.680518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 18:22:20.681133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 18:22:20.703015       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:22:20.703474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:22:20.766338       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 18:22:20.766458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 18:22:20.789649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 18:22:20.790593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 18:22:20.803334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:22:20.804054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:22:20.808728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:22:20.809130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:22:20.838937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 18:22:20.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 18:22:20.865287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:22:20.865345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 18:22:22.187395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:45:24 ha-653100 kubelet[2226]: E0415 18:45:24.245433    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:45:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:45:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:46:24 ha-653100 kubelet[2226]: E0415 18:46:24.244695    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:46:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:46:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:46:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:46:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:47:24 ha-653100 kubelet[2226]: E0415 18:47:24.244011    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:47:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:47:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:47:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:47:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:48:24 ha-653100 kubelet[2226]: E0415 18:48:24.245141    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:48:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:48:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:48:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:48:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 18:49:24 ha-653100 kubelet[2226]: E0415 18:49:24.244283    2226 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 18:49:24 ha-653100 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 18:49:24 ha-653100 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 18:49:24 ha-653100 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 18:49:24 ha-653100 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:49:16.633175    2960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-653100 -n ha-653100: (13.0240444s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-653100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-tk6sh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-653100 describe pod busybox-7fdf7869d9-tk6sh
helpers_test.go:282: (dbg) kubectl --context ha-653100 describe pod busybox-7fdf7869d9-tk6sh:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-tk6sh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjshx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjshx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  7m16s (x4 over 22m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  2m16s                default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (127.38s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (59.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-gkn8h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-gkn8h -- sh -c "ping -c 1 172.19.48.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-gkn8h -- sh -c "ping -c 1 172.19.48.1": exit status 1 (10.5559404s)

                                                
                                                
-- stdout --
	PING 172.19.48.1 (172.19.48.1): 56 data bytes
	
	--- 172.19.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:29:21.275303    5328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.19.48.1) from pod (busybox-7fdf7869d9-gkn8h): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-hfpk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-hfpk6 -- sh -c "ping -c 1 172.19.48.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-hfpk6 -- sh -c "ping -c 1 172.19.48.1": exit status 1 (10.5470735s)

                                                
                                                
-- stdout --
	PING 172.19.48.1 (172.19.48.1): 56 data bytes
	
	--- 172.19.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:29:32.416313   12052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.19.48.1) from pod (busybox-7fdf7869d9-hfpk6): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-841000 -n multinode-841000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-841000 -n multinode-841000: (13.0711886s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 logs -n 25: (9.0935907s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| ssh     | mount-start-2-235400 ssh -- ls                    | mount-start-2-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:17 UTC | 15 Apr 24 19:17 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| delete  | -p mount-start-1-235400                           | mount-start-1-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:17 UTC | 15 Apr 24 19:18 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-235400 ssh -- ls                    | mount-start-2-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:18 UTC | 15 Apr 24 19:18 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| stop    | -p mount-start-2-235400                           | mount-start-2-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:18 UTC | 15 Apr 24 19:18 UTC |
	| start   | -p mount-start-2-235400                           | mount-start-2-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:18 UTC | 15 Apr 24 19:20 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:20 UTC |                     |
	|         | --profile mount-start-2-235400 --v 0              |                      |                   |                |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |                |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |                |                     |                     |
	|         |                                                 0 |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-235400 ssh -- ls                    | mount-start-2-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:20 UTC | 15 Apr 24 19:21 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| delete  | -p mount-start-2-235400                           | mount-start-2-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:21 UTC | 15 Apr 24 19:21 UTC |
	| delete  | -p mount-start-1-235400                           | mount-start-1-235400 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:21 UTC | 15 Apr 24 19:21 UTC |
	| start   | -p multinode-841000                               | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:21 UTC | 15 Apr 24 19:28 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- apply -f                   | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- rollout                    | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | status deployment/busybox                         |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- get pods -o                | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- get pods -o                | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-gkn8h --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-hfpk6 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-gkn8h --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-hfpk6 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-gkn8h -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-hfpk6 -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- get pods -o                | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-gkn8h                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC |                     |
	|         | busybox-7fdf7869d9-gkn8h -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.48.1                          |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC | 15 Apr 24 19:29 UTC |
	|         | busybox-7fdf7869d9-hfpk6                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-841000 -- exec                       | multinode-841000     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:29 UTC |                     |
	|         | busybox-7fdf7869d9-hfpk6 -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.48.1                          |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 19:21:40
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 19:21:40.060634    2716 out.go:291] Setting OutFile to fd 796 ...
	I0415 19:21:40.061212    2716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:21:40.061212    2716 out.go:304] Setting ErrFile to fd 656...
	I0415 19:21:40.061212    2716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:21:40.085368    2716 out.go:298] Setting JSON to false
	I0415 19:21:40.088968    2716 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20626,"bootTime":1713188273,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 19:21:40.088968    2716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 19:21:40.093025    2716 out.go:177] * [multinode-841000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 19:21:40.100019    2716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:21:40.100019    2716 notify.go:220] Checking for updates...
	I0415 19:21:40.103009    2716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 19:21:40.105581    2716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 19:21:40.109842    2716 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 19:21:40.112764    2716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 19:21:40.115792    2716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 19:21:45.911983    2716 out.go:177] * Using the hyperv driver based on user configuration
	I0415 19:21:45.915263    2716 start.go:297] selected driver: hyperv
	I0415 19:21:45.915263    2716 start.go:901] validating driver "hyperv" against <nil>
	I0415 19:21:45.915263    2716 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 19:21:45.972261    2716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 19:21:45.973671    2716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:21:45.973671    2716 cni.go:84] Creating CNI manager for ""
	I0415 19:21:45.973671    2716 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 19:21:45.973671    2716 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 19:21:45.973671    2716 start.go:340] cluster config:
	{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:21:45.974333    2716 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 19:21:45.978465    2716 out.go:177] * Starting "multinode-841000" primary control-plane node in "multinode-841000" cluster
	I0415 19:21:45.981272    2716 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:21:45.981272    2716 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 19:21:45.981272    2716 cache.go:56] Caching tarball of preloaded images
	I0415 19:21:45.981781    2716 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:21:45.982093    2716 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 19:21:45.982275    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:21:45.982275    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json: {Name:mk417aea25697d9ce4f3bb1be1051fa880d1f409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:21:45.984073    2716 start.go:360] acquireMachinesLock for multinode-841000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 19:21:45.984073    2716 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-841000"
	I0415 19:21:45.984506    2716 start.go:93] Provisioning new machine with config: &{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 19:21:45.984506    2716 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 19:21:45.989753    2716 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 19:21:45.989926    2716 start.go:159] libmachine.API.Create for "multinode-841000" (driver="hyperv")
	I0415 19:21:45.989926    2716 client.go:168] LocalClient.Create starting
	I0415 19:21:45.990713    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 19:21:45.990868    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:21:45.990868    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:21:45.990868    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 19:21:45.991427    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:21:45.991427    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:21:45.991606    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 19:21:48.237174    2716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 19:21:48.237174    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:48.238273    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 19:21:50.082410    2716 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 19:21:50.082410    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:50.083097    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:21:51.638692    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:21:51.638692    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:51.638794    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:21:55.520384    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:21:55.521108    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:55.523627    2716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 19:21:56.104338    2716 main.go:141] libmachine: Creating SSH key...
	I0415 19:21:56.313160    2716 main.go:141] libmachine: Creating VM...
	I0415 19:21:56.313160    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:21:59.367792    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:21:59.367792    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:59.367792    2716 main.go:141] libmachine: Using switch "Default Switch"
	I0415 19:21:59.368086    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:22:01.228599    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:22:01.228693    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:01.228693    2716 main.go:141] libmachine: Creating VHD
	I0415 19:22:01.228755    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 19:22:05.263884    2716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F2D30E75-2B2A-480A-A926-F1F120B4E376
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 19:22:05.263884    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:05.263884    2716 main.go:141] libmachine: Writing magic tar header
	I0415 19:22:05.264533    2716 main.go:141] libmachine: Writing SSH key tar header
	I0415 19:22:05.274223    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 19:22:08.613133    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:08.613133    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:08.613914    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\disk.vhd' -SizeBytes 20000MB
	I0415 19:22:11.374881    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:11.375432    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:11.375572    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-841000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 19:22:15.262016    2716 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-841000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 19:22:15.262916    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:15.262916    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-841000 -DynamicMemoryEnabled $false
	I0415 19:22:17.715675    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:17.715892    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:17.715949    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-841000 -Count 2
	I0415 19:22:20.036849    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:20.037654    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:20.037752    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-841000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\boot2docker.iso'
	I0415 19:22:22.799227    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:22.799227    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:22.799227    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-841000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\disk.vhd'
	I0415 19:22:25.689574    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:25.689903    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:25.689903    2716 main.go:141] libmachine: Starting VM...
	I0415 19:22:25.689903    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-841000
	I0415 19:22:29.012810    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:29.012872    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:29.012982    2716 main.go:141] libmachine: Waiting for host to start...
	I0415 19:22:29.013108    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:31.456870    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:31.457067    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:31.457067    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:34.115184    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:34.115184    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:35.126466    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:37.431717    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:37.431717    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:37.432013    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:40.110261    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:40.110261    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:41.110897    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:43.526331    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:43.526664    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:43.526664    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:46.207371    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:46.207603    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:47.213986    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:49.558395    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:49.558395    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:49.558622    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:52.275773    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:52.276340    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:53.277303    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:55.677874    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:55.677874    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:55.678677    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:58.430305    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:22:58.431267    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:58.431472    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:00.717245    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:00.717245    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:00.717245    2716 machine.go:94] provisionDockerMachine start ...
	I0415 19:23:00.717831    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:03.097790    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:03.097790    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:03.098497    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:05.862158    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:05.862158    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:05.873856    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:05.885689    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:05.885689    2716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:23:06.011608    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 19:23:06.011608    2716 buildroot.go:166] provisioning hostname "multinode-841000"
	I0415 19:23:06.011608    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:08.296656    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:08.296656    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:08.296751    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:10.992939    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:10.993096    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:10.999681    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:11.000892    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:11.000960    2716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841000 && echo "multinode-841000" | sudo tee /etc/hostname
	I0415 19:23:11.157927    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841000
	
	I0415 19:23:11.157927    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:13.476624    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:13.476624    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:13.476624    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:16.188133    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:16.188197    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:16.194137    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:16.194449    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:16.194449    2716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:23:16.333414    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
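The SSH command above is minikube's idempotent `/etc/hosts` update: add the hostname only if no line already carries it, rewriting an existing `127.0.1.1` entry in place when there is one. A minimal local sketch of the same logic, run against a temp file rather than `/etc/hosts` (the hostname is the VM's from this log; the seed contents are illustrative):

```shell
# Sketch of the idempotent hostname-entry update, against a temp file.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=multinode-841000
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # an existing 127.0.1.1 entry gets rewritten in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # otherwise a fresh entry is appended
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the block twice leaves the file unchanged the second time, which is why the provisioner can re-run it safely on every start.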
	I0415 19:23:16.333414    2716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 19:23:16.333414    2716 buildroot.go:174] setting up certificates
	I0415 19:23:16.333414    2716 provision.go:84] configureAuth start
	I0415 19:23:16.333414    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:18.634486    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:18.634486    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:18.634486    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:21.373180    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:21.373180    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:21.373977    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:23.669852    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:23.669852    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:23.669852    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:26.429688    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:26.429688    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:26.430626    2716 provision.go:143] copyHostCerts
	I0415 19:23:26.430827    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 19:23:26.430880    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:23:26.430880    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 19:23:26.431617    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 19:23:26.432626    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 19:23:26.432626    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:23:26.432626    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 19:23:26.433375    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:23:26.434661    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 19:23:26.434661    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:23:26.435196    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 19:23:26.435482    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:23:26.436191    2716 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-841000 san=[127.0.0.1 172.19.62.237 localhost minikube multinode-841000]
	I0415 19:23:26.606364    2716 provision.go:177] copyRemoteCerts
	I0415 19:23:26.624566    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:23:26.624751    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:28.904941    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:28.904941    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:28.905164    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:31.617898    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:31.618859    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:31.619364    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:23:31.734873    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1102236s)
	I0415 19:23:31.734873    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 19:23:31.735397    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0415 19:23:31.782254    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 19:23:31.782254    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 19:23:31.833786    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 19:23:31.834213    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 19:23:31.886045    2716 provision.go:87] duration metric: took 15.5525044s to configureAuth
	I0415 19:23:31.886045    2716 buildroot.go:189] setting minikube options for container-runtime
	I0415 19:23:31.886045    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:23:31.886045    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:34.173666    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:34.173666    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:34.174196    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:36.901568    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:36.901568    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:36.908454    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:36.909009    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:36.909009    2716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:23:37.043553    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 19:23:37.043553    2716 buildroot.go:70] root file system type: tmpfs
	I0415 19:23:37.044795    2716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:23:37.044853    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:39.390858    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:39.390858    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:39.390858    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:42.117119    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:42.117119    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:42.123747    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:42.124412    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:42.124412    2716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:23:42.288192    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 19:23:42.288192    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:44.574143    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:44.574223    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:44.574223    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:47.288812    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:47.288901    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:47.296301    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:47.296301    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:47.296843    2716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:23:49.504243    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 19:23:49.504366    2716 machine.go:97] duration metric: took 48.7867254s to provisionDockerMachine
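The `diff ... || { mv ...; systemctl ...; }` command above is a write-then-diff-then-swap: the new unit is written to `docker.service.new`, compared against the installed unit, and only installed (with a daemon-reload and restart) when they differ. The `diff: can't stat` line in the output is the expected first-boot case, since no unit exists yet. A hedged sketch of the pattern against temp paths, with the `systemctl` side elided:

```shell
# Sketch of the write-then-diff-then-swap unit install, on temp paths.
DIR=$(mktemp -d)
CUR="$DIR/docker.service"
NEW="$DIR/docker.service.new"
printf '[Unit]\nDescription=demo unit\n' > "$NEW"
# diff exits non-zero when the files differ or the target is missing
# (the "can't stat" case in the log), which triggers the install branch;
# an unchanged unit skips the reload/restart entirely.
if ! diff -u "$CUR" "$NEW" >/dev/null 2>&1; then
  mv "$NEW" "$CUR"
  echo "unit updated; minikube then runs daemon-reload + enable + restart"
fi
```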
	I0415 19:23:49.504366    2716 client.go:171] duration metric: took 2m3.5134387s to LocalClient.Create
	I0415 19:23:49.504470    2716 start.go:167] duration metric: took 2m3.5135432s to libmachine.API.Create "multinode-841000"
	I0415 19:23:49.504470    2716 start.go:293] postStartSetup for "multinode-841000" (driver="hyperv")
	I0415 19:23:49.504470    2716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:23:49.520859    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:23:49.520859    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:51.801117    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:51.801117    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:51.801588    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:54.521952    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:54.522967    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:54.523203    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:23:54.623567    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1026675s)
	I0415 19:23:54.637343    2716 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 19:23:54.644876    2716 command_runner.go:130] > NAME=Buildroot
	I0415 19:23:54.644876    2716 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0415 19:23:54.644876    2716 command_runner.go:130] > ID=buildroot
	I0415 19:23:54.644876    2716 command_runner.go:130] > VERSION_ID=2023.02.9
	I0415 19:23:54.644876    2716 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0415 19:23:54.644876    2716 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 19:23:54.644876    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 19:23:54.644876    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 19:23:54.645628    2716 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 19:23:54.646220    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 19:23:54.661264    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 19:23:54.682170    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 19:23:54.734939    2716 start.go:296] duration metric: took 5.2304269s for postStartSetup
	I0415 19:23:54.737722    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:57.068109    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:57.068185    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:57.068185    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:59.799044    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:59.799946    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:59.800224    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:23:59.803102    2716 start.go:128] duration metric: took 2m13.817512s to createHost
	I0415 19:23:59.803276    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:02.096327    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:02.096327    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:02.096327    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:04.805308    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:04.806268    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:04.813051    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:24:04.813129    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:24:04.813129    2716 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 19:24:04.940901    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713209044.943326180
	
	I0415 19:24:04.940901    2716 fix.go:216] guest clock: 1713209044.943326180
	I0415 19:24:04.940901    2716 fix.go:229] Guest: 2024-04-15 19:24:04.94332618 +0000 UTC Remote: 2024-04-15 19:23:59.8032762 +0000 UTC m=+139.927639801 (delta=5.14004998s)
	I0415 19:24:04.940901    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:07.241084    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:07.241084    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:07.242015    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:09.986742    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:09.987361    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:09.995123    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:24:09.995273    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:24:09.995273    2716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713209044
	I0415 19:24:10.135381    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 19:24:04 UTC 2024
	
	I0415 19:24:10.135381    2716 fix.go:236] clock set: Mon Apr 15 19:24:04 UTC 2024
	 (err=<nil>)
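The clock fix above reads the guest's epoch seconds over SSH (`date +%s.%N`, garbled to `%!s(MISSING)` by a printf-verb mismatch in the log), computes the host/guest delta, and resets the guest with `sudo date -s @<epoch>`. A hedged local sketch of the comparison, with both `date` calls running on one machine (in minikube the first runs over SSH inside the VM):

```shell
# Sketch of the guest-clock drift check; both sides are local here.
GUEST=$(date +%s)
HOST=$(date +%s)
DELTA=$((GUEST - HOST))
# minikube sets the clock unconditionally; the threshold here is only
# for illustration
if [ "${DELTA#-}" -gt 2 ]; then
  echo "would run on the guest: sudo date -s @$HOST"
fi
echo "delta=${DELTA}s"
```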
	I0415 19:24:10.135381    2716 start.go:83] releasing machines lock for "multinode-841000", held for 2m24.1501407s
	I0415 19:24:10.136902    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:12.459639    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:12.460633    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:12.460664    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:15.172259    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:15.173272    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:15.180412    2716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 19:24:15.180987    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:15.190138    2716 ssh_runner.go:195] Run: cat /version.json
	I0415 19:24:15.190138    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:20.393652    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:20.393840    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:20.394376    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:24:20.415460    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:20.415460    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:20.416459    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:24:20.485351    2716 command_runner.go:130] > {"iso_version": "v1.33.0-1713175573-18634", "kicbase_version": "v0.0.43-1712854342-18621", "minikube_version": "v1.33.0-beta.0", "commit": "0ece0b4c602cbaab0821f0ba2d6ec4a07a392655"}
	I0415 19:24:20.486127    2716 ssh_runner.go:235] Completed: cat /version.json: (5.2959463s)
	I0415 19:24:20.502254    2716 ssh_runner.go:195] Run: systemctl --version
	I0415 19:24:20.616612    2716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0415 19:24:20.617582    2716 command_runner.go:130] > systemd 252 (252)
	I0415 19:24:20.617639    2716 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4371832s)
	I0415 19:24:20.617639    2716 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0415 19:24:20.632758    2716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 19:24:20.642157    2716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0415 19:24:20.642777    2716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 19:24:20.657119    2716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 19:24:20.695026    2716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0415 19:24:20.695026    2716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 19:24:20.695026    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:24:20.695434    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:24:20.737399    2716 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0415 19:24:20.753174    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 19:24:20.792906    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 19:24:20.818871    2716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 19:24:20.832725    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 19:24:20.872440    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:24:20.910360    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 19:24:20.947615    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:24:20.986546    2716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 19:24:21.028398    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 19:24:21.065788    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 19:24:21.102214    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 19:24:21.139167    2716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 19:24:21.159689    2716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0415 19:24:21.172969    2716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 19:24:21.220870    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:21.471026    2716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 19:24:21.507347    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:24:21.522730    2716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 19:24:21.548976    2716 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0415 19:24:21.549019    2716 command_runner.go:130] > [Unit]
	I0415 19:24:21.549237    2716 command_runner.go:130] > Description=Docker Application Container Engine
	I0415 19:24:21.549237    2716 command_runner.go:130] > Documentation=https://docs.docker.com
	I0415 19:24:21.549318    2716 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0415 19:24:21.549318    2716 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0415 19:24:21.549318    2716 command_runner.go:130] > StartLimitBurst=3
	I0415 19:24:21.549351    2716 command_runner.go:130] > StartLimitIntervalSec=60
	I0415 19:24:21.549351    2716 command_runner.go:130] > [Service]
	I0415 19:24:21.549351    2716 command_runner.go:130] > Type=notify
	I0415 19:24:21.549351    2716 command_runner.go:130] > Restart=on-failure
	I0415 19:24:21.549394    2716 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0415 19:24:21.549394    2716 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0415 19:24:21.549394    2716 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0415 19:24:21.549438    2716 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0415 19:24:21.549438    2716 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0415 19:24:21.549438    2716 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0415 19:24:21.549438    2716 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0415 19:24:21.549482    2716 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0415 19:24:21.549514    2716 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0415 19:24:21.549514    2716 command_runner.go:130] > ExecStart=
	I0415 19:24:21.549514    2716 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0415 19:24:21.549566    2716 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0415 19:24:21.549566    2716 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0415 19:24:21.549599    2716 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0415 19:24:21.549599    2716 command_runner.go:130] > LimitNOFILE=infinity
	I0415 19:24:21.549599    2716 command_runner.go:130] > LimitNPROC=infinity
	I0415 19:24:21.549599    2716 command_runner.go:130] > LimitCORE=infinity
	I0415 19:24:21.549599    2716 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0415 19:24:21.549650    2716 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0415 19:24:21.549650    2716 command_runner.go:130] > TasksMax=infinity
	I0415 19:24:21.549759    2716 command_runner.go:130] > TimeoutStartSec=0
	I0415 19:24:21.549759    2716 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0415 19:24:21.549789    2716 command_runner.go:130] > Delegate=yes
	I0415 19:24:21.549789    2716 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0415 19:24:21.549789    2716 command_runner.go:130] > KillMode=process
	I0415 19:24:21.549789    2716 command_runner.go:130] > [Install]
	I0415 19:24:21.549789    2716 command_runner.go:130] > WantedBy=multi-user.target
	I0415 19:24:21.565085    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:24:21.604369    2716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 19:24:21.664841    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:24:21.705768    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:24:21.749870    2716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 19:24:21.828417    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:24:21.856107    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:24:21.900812    2716 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0415 19:24:21.917441    2716 ssh_runner.go:195] Run: which cri-dockerd
	I0415 19:24:21.922526    2716 command_runner.go:130] > /usr/bin/cri-dockerd
	I0415 19:24:21.942272    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 19:24:21.963109    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 19:24:22.016256    2716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 19:24:22.264137    2716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 19:24:22.471570    2716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 19:24:22.471791    2716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 19:24:22.521250    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:22.753115    2716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:24:25.368075    2716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6149387s)
	I0415 19:24:25.383895    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 19:24:25.430295    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:24:25.470646    2716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 19:24:25.720400    2716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 19:24:25.955694    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:26.170379    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 19:24:26.222387    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:24:26.266389    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:26.484304    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 19:24:26.617534    2716 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 19:24:26.632318    2716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 19:24:26.645257    2716 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0415 19:24:26.645347    2716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0415 19:24:26.645386    2716 command_runner.go:130] > Device: 0,22	Inode: 883         Links: 1
	I0415 19:24:26.645386    2716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0415 19:24:26.645427    2716 command_runner.go:130] > Access: 2024-04-15 19:24:26.512175974 +0000
	I0415 19:24:26.645427    2716 command_runner.go:130] > Modify: 2024-04-15 19:24:26.512175974 +0000
	I0415 19:24:26.645427    2716 command_runner.go:130] > Change: 2024-04-15 19:24:26.518175974 +0000
	I0415 19:24:26.645486    2716 command_runner.go:130] >  Birth: -
	I0415 19:24:26.645516    2716 start.go:562] Will wait 60s for crictl version
	I0415 19:24:26.660095    2716 ssh_runner.go:195] Run: which crictl
	I0415 19:24:26.666687    2716 command_runner.go:130] > /usr/bin/crictl
	I0415 19:24:26.682347    2716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 19:24:26.740095    2716 command_runner.go:130] > Version:  0.1.0
	I0415 19:24:26.740252    2716 command_runner.go:130] > RuntimeName:  docker
	I0415 19:24:26.740252    2716 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0415 19:24:26.740252    2716 command_runner.go:130] > RuntimeApiVersion:  v1
	I0415 19:24:26.740312    2716 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 19:24:26.752687    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:24:26.784767    2716 command_runner.go:130] > 26.0.0
	I0415 19:24:26.795245    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:24:26.831249    2716 command_runner.go:130] > 26.0.0
	I0415 19:24:26.835318    2716 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 19:24:26.835318    2716 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 19:24:26.842299    2716 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 19:24:26.842299    2716 ip.go:210] interface addr: 172.19.48.1/20
	I0415 19:24:26.856293    2716 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 19:24:26.862967    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:24:26.886634    2716 kubeadm.go:877] updating cluster {Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 19:24:26.886634    2716 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:24:26.897311    2716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:24:26.918394    2716 docker.go:685] Got preloaded images: 
	I0415 19:24:26.918394    2716 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 19:24:26.934063    2716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 19:24:26.954847    2716 command_runner.go:139] > {"Repositories":{}}
	I0415 19:24:26.968826    2716 ssh_runner.go:195] Run: which lz4
	I0415 19:24:26.974832    2716 command_runner.go:130] > /usr/bin/lz4
	I0415 19:24:26.975257    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 19:24:26.990290    2716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 19:24:26.996873    2716 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 19:24:26.997882    2716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 19:24:26.997918    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 19:24:28.862867    2716 docker.go:649] duration metric: took 1.8872924s to copy over tarball
	I0415 19:24:28.876645    2716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 19:24:38.097671    2716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2209517s)
	I0415 19:24:38.097807    2716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 19:24:38.169465    2716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 19:24:38.190723    2716 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b
5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0415 19:24:38.190723    2716 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 19:24:38.242447    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:38.473882    2716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:24:41.419870    2716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9458914s)
	I0415 19:24:41.431373    2716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:24:41.458820    2716 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0415 19:24:41.459910    2716 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0415 19:24:41.459983    2716 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0415 19:24:41.459983    2716 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:24:41.460057    2716 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 19:24:41.460057    2716 cache_images.go:84] Images are preloaded, skipping loading
	I0415 19:24:41.460128    2716 kubeadm.go:928] updating node { 172.19.62.237 8443 v1.29.3 docker true true} ...
	I0415 19:24:41.460249    2716 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-841000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.62.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 19:24:41.473102    2716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 19:24:41.510548    2716 command_runner.go:130] > cgroupfs
	I0415 19:24:41.511686    2716 cni.go:84] Creating CNI manager for ""
	I0415 19:24:41.511686    2716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 19:24:41.512255    2716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 19:24:41.512337    2716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.62.237 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-841000 NodeName:multinode-841000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.62.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.62.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 19:24:41.512422    2716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.62.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-841000"
	  kubeletExtraArgs:
	    node-ip: 172.19.62.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.62.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 19:24:41.528710    2716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 19:24:41.548458    2716 command_runner.go:130] > kubeadm
	I0415 19:24:41.548458    2716 command_runner.go:130] > kubectl
	I0415 19:24:41.548458    2716 command_runner.go:130] > kubelet
	I0415 19:24:41.549449    2716 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 19:24:41.563965    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 19:24:41.584446    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0415 19:24:41.620687    2716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 19:24:41.652366    2716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0415 19:24:41.698439    2716 ssh_runner.go:195] Run: grep 172.19.62.237	control-plane.minikube.internal$ /etc/hosts
	I0415 19:24:41.706003    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.62.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:24:41.741133    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:41.953381    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:24:41.984994    2716 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000 for IP: 172.19.62.237
	I0415 19:24:41.985165    2716 certs.go:194] generating shared ca certs ...
	I0415 19:24:41.985237    2716 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:41.985532    2716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 19:24:41.986378    2716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 19:24:41.986378    2716 certs.go:256] generating profile certs ...
	I0415 19:24:41.987364    2716 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.key
	I0415 19:24:41.987392    2716 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.crt with IP's: []
	I0415 19:24:42.229916    2716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.crt ...
	I0415 19:24:42.229916    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.crt: {Name:mk9badea2ff5b569dc09e71a8f795bea7c9e1356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.231015    2716 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.key ...
	I0415 19:24:42.231015    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.key: {Name:mke4cb8007f3a005256b61c64568ce8d40a62426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.233103    2716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593
	I0415 19:24:42.233103    2716 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.62.237]
	I0415 19:24:42.389686    2716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593 ...
	I0415 19:24:42.389686    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593: {Name:mk6e140699b78be59c9bc5f199ee895595487b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.390692    2716 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593 ...
	I0415 19:24:42.390692    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593: {Name:mk727d6acd2006bf70a4f4c8c4e152752ee2e9af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.391689    2716 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt
	I0415 19:24:42.406617    2716 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key
	I0415 19:24:42.407577    2716 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key
	I0415 19:24:42.408589    2716 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt with IP's: []
	I0415 19:24:42.537552    2716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt ...
	I0415 19:24:42.537552    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt: {Name:mkf3e1e5f690513401ff7fb344202eb4abdc6cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.538558    2716 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key ...
	I0415 19:24:42.538558    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key: {Name:mke8ee9fca7dffdeb19815e1840285da7eb6d959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.540522    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 19:24:42.540748    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 19:24:42.540949    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 19:24:42.541087    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 19:24:42.541350    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 19:24:42.541532    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 19:24:42.541686    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 19:24:42.550928    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 19:24:42.551791    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 19:24:42.552467    2716 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 19:24:42.552604    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 19:24:42.552772    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 19:24:42.553087    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 19:24:42.553343    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 19:24:42.553634    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 19:24:42.553634    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:42.554245    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 19:24:42.554415    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 19:24:42.554627    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 19:24:42.612957    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 19:24:42.669189    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 19:24:42.718283    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 19:24:42.769205    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 19:24:42.825934    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 19:24:42.878537    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 19:24:42.932734    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 19:24:42.983772    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 19:24:43.038842    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 19:24:43.093027    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 19:24:43.146498    2716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 19:24:43.196794    2716 ssh_runner.go:195] Run: openssl version
	I0415 19:24:43.208699    2716 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0415 19:24:43.223855    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 19:24:43.261989    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.271207    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.271407    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.286785    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.297352    2716 command_runner.go:130] > 51391683
	I0415 19:24:43.312704    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
	I0415 19:24:43.350586    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 19:24:43.386359    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.393930    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.394014    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.407956    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.417519    2716 command_runner.go:130] > 3ec20f2e
	I0415 19:24:43.432861    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 19:24:43.469609    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 19:24:43.504602    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.514294    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.514391    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.527936    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.538176    2716 command_runner.go:130] > b5213941
	I0415 19:24:43.552348    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 19:24:43.589245    2716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 19:24:43.596157    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:24:43.597158    2716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:24:43.597330    2716 kubeadm.go:391] StartCluster: {Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:24:43.608544    2716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 19:24:43.648717    2716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 19:24:43.666142    2716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0415 19:24:43.666142    2716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0415 19:24:43.666142    2716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0415 19:24:43.680266    2716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 19:24:43.712154    2716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 19:24:43.732157    2716 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 19:24:43.732157    2716 kubeadm.go:156] found existing configuration files:
	
	I0415 19:24:43.746148    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 19:24:43.769233    2716 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 19:24:43.769293    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 19:24:43.785469    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 19:24:43.816137    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 19:24:43.840104    2716 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 19:24:43.841116    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 19:24:43.854109    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 19:24:43.883630    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 19:24:43.898633    2716 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 19:24:43.898633    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 19:24:43.910627    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 19:24:43.941576    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 19:24:43.962066    2716 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 19:24:43.962210    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 19:24:43.978032    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 19:24:43.999043    2716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 19:24:44.480901    2716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 19:24:44.480901    2716 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 19:24:59.143055    2716 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 19:24:59.143171    2716 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0415 19:24:59.143385    2716 command_runner.go:130] > [preflight] Running pre-flight checks
	I0415 19:24:59.143417    2716 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 19:24:59.143613    2716 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 19:24:59.143613    2716 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 19:24:59.143613    2716 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 19:24:59.143613    2716 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 19:24:59.143613    2716 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 19:24:59.143613    2716 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 19:24:59.143613    2716 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 19:24:59.150477    2716 out.go:204]   - Generating certificates and keys ...
	I0415 19:24:59.143613    2716 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 19:24:59.150477    2716 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0415 19:24:59.150477    2716 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 19:24:59.150477    2716 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0415 19:24:59.150477    2716 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 19:24:59.151609    2716 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0415 19:24:59.151726    2716 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 19:24:59.151812    2716 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.151812    2716 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.151812    2716 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 19:24:59.151812    2716 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0415 19:24:59.152342    2716 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.152342    2716 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.152492    2716 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 19:24:59.152533    2716 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 19:24:59.152533    2716 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 19:24:59.152533    2716 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 19:24:59.152533    2716 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0415 19:24:59.152533    2716 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 19:24:59.153615    2716 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 19:24:59.153677    2716 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 19:24:59.153862    2716 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 19:24:59.153862    2716 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 19:24:59.153905    2716 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 19:24:59.154170    2716 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 19:24:59.154303    2716 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 19:24:59.158501    2716 out.go:204]   - Booting up control plane ...
	I0415 19:24:59.154368    2716 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 19:24:59.158829    2716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 19:24:59.158829    2716 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 19:24:59.158987    2716 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 19:24:59.158987    2716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 19:24:59.159153    2716 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 19:24:59.159153    2716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 19:24:59.159444    2716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 19:24:59.159444    2716 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 19:24:59.159751    2716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 19:24:59.159792    2716 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 19:24:59.159913    2716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0415 19:24:59.159913    2716 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 19:24:59.160369    2716 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 19:24:59.160369    2716 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 19:24:59.160493    2716 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.509640 seconds
	I0415 19:24:59.160543    2716 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.509640 seconds
	I0415 19:24:59.160689    2716 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 19:24:59.160689    2716 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 19:24:59.161035    2716 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 19:24:59.161035    2716 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 19:24:59.161220    2716 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0415 19:24:59.161270    2716 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 19:24:59.161710    2716 command_runner.go:130] > [mark-control-plane] Marking the node multinode-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 19:24:59.161710    2716 kubeadm.go:309] [mark-control-plane] Marking the node multinode-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 19:24:59.161837    2716 command_runner.go:130] > [bootstrap-token] Using token: j6rchv.u0yc33wyp2zsd69b
	I0415 19:24:59.161837    2716 kubeadm.go:309] [bootstrap-token] Using token: j6rchv.u0yc33wyp2zsd69b
	I0415 19:24:59.164964    2716 out.go:204]   - Configuring RBAC rules ...
	I0415 19:24:59.165049    2716 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 19:24:59.165049    2716 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 19:24:59.165049    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 19:24:59.165049    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 19:24:59.165643    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 19:24:59.165643    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 19:24:59.165963    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 19:24:59.165963    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 19:24:59.166210    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 19:24:59.166274    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 19:24:59.166404    2716 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 19:24:59.166404    2716 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 19:24:59.166404    2716 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 19:24:59.166404    2716 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 19:24:59.166404    2716 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0415 19:24:59.166404    2716 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 19:24:59.167057    2716 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0415 19:24:59.167057    2716 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 19:24:59.167057    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0415 19:24:59.167310    2716 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0415 19:24:59.167310    2716 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0415 19:24:59.167310    2716 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 19:24:59.167310    2716 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 19:24:59.167310    2716 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 19:24:59.167310    2716 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 19:24:59.167310    2716 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 19:24:59.167310    2716 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 19:24:59.167310    2716 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.168523    2716 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 19:24:59.168585    2716 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0415 19:24:59.168585    2716 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 19:24:59.168585    2716 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 19:24:59.168585    2716 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 19:24:59.168585    2716 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 19:24:59.168585    2716 kubeadm.go:309] 
	I0415 19:24:59.168585    2716 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0415 19:24:59.169165    2716 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 19:24:59.169465    2716 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 19:24:59.169465    2716 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0415 19:24:59.169632    2716 kubeadm.go:309] 
	I0415 19:24:59.169804    2716 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.169804    2716 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.170258    2716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 19:24:59.170258    2716 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 19:24:59.170421    2716 command_runner.go:130] > 	--control-plane 
	I0415 19:24:59.170421    2716 kubeadm.go:309] 	--control-plane 
	I0415 19:24:59.170421    2716 kubeadm.go:309] 
	I0415 19:24:59.170649    2716 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0415 19:24:59.170721    2716 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 19:24:59.170721    2716 kubeadm.go:309] 
	I0415 19:24:59.170917    2716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.170971    2716 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.171192    2716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 19:24:59.171247    2716 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 19:24:59.171301    2716 cni.go:84] Creating CNI manager for ""
	I0415 19:24:59.171301    2716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 19:24:59.174631    2716 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 19:24:59.195921    2716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 19:24:59.215514    2716 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0415 19:24:59.215514    2716 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0415 19:24:59.215514    2716 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0415 19:24:59.215514    2716 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0415 19:24:59.215514    2716 command_runner.go:130] > Access: 2024-04-15 19:22:55.200417200 +0000
	I0415 19:24:59.215514    2716 command_runner.go:130] > Modify: 2024-04-15 15:49:28.000000000 +0000
	I0415 19:24:59.215514    2716 command_runner.go:130] > Change: 2024-04-15 19:22:45.121000000 +0000
	I0415 19:24:59.215514    2716 command_runner.go:130] >  Birth: -
	I0415 19:24:59.215514    2716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 19:24:59.215514    2716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 19:24:59.343662    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 19:24:59.991329    2716 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0415 19:24:59.991329    2716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0415 19:24:59.991329    2716 command_runner.go:130] > serviceaccount/kindnet created
	I0415 19:24:59.991435    2716 command_runner.go:130] > daemonset.apps/kindnet created
	I0415 19:24:59.991493    2716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 19:25:00.009041    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:00.009041    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-841000 minikube.k8s.io/updated_at=2024_04_15T19_24_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=multinode-841000 minikube.k8s.io/primary=true
	I0415 19:25:00.020479    2716 command_runner.go:130] > -16
	I0415 19:25:00.021009    2716 ops.go:34] apiserver oom_adj: -16
	I0415 19:25:00.248150    2716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0415 19:25:00.248330    2716 command_runner.go:130] > node/multinode-841000 labeled
	I0415 19:25:00.262597    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:00.429221    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:00.765484    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:00.889898    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:01.267175    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:01.378983    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:01.772401    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:01.892267    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:02.270240    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:02.393761    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:02.775868    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:02.893256    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:03.265864    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:03.386288    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:03.762648    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:03.900208    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:04.267634    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:04.391779    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:04.771539    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:04.890716    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:05.274414    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:05.387287    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:05.774652    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:05.911595    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:06.277552    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:06.394174    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:06.778143    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:06.899343    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:07.266888    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:07.382375    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:07.768664    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:07.884965    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:08.271818    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:08.384778    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:08.778983    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:08.897051    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:09.277417    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:09.426507    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:09.764842    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:09.885394    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:10.273745    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:10.408407    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:10.761869    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:10.912098    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:11.274697    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:11.394198    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:11.762370    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:11.949404    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:12.271315    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:12.430297    2716 command_runner.go:130] > NAME      SECRETS   AGE
	I0415 19:25:12.431310    2716 command_runner.go:130] > default   0         0s
	I0415 19:25:12.431391    2716 kubeadm.go:1107] duration metric: took 12.4397971s to wait for elevateKubeSystemPrivileges
	W0415 19:25:12.431391    2716 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 19:25:12.431391    2716 kubeadm.go:393] duration metric: took 28.8338277s to StartCluster
	I0415 19:25:12.431391    2716 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:25:12.431753    2716 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:12.432993    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:25:12.434702    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 19:25:12.434804    2716 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 19:25:12.435141    2716 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 19:25:12.437465    2716 addons.go:69] Setting storage-provisioner=true in profile "multinode-841000"
	I0415 19:25:12.435625    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:25:12.437465    2716 out.go:177] * Verifying Kubernetes components...
	I0415 19:25:12.437594    2716 addons.go:234] Setting addon storage-provisioner=true in "multinode-841000"
	I0415 19:25:12.437624    2716 addons.go:69] Setting default-storageclass=true in profile "multinode-841000"
	I0415 19:25:12.441073    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:25:12.441073    2716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-841000"
	I0415 19:25:12.442465    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:12.442731    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:12.457069    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:25:12.720193    2716 command_runner.go:130] > apiVersion: v1
	I0415 19:25:12.721198    2716 command_runner.go:130] > data:
	I0415 19:25:12.721198    2716 command_runner.go:130] >   Corefile: |
	I0415 19:25:12.721198    2716 command_runner.go:130] >     .:53 {
	I0415 19:25:12.721198    2716 command_runner.go:130] >         errors
	I0415 19:25:12.721198    2716 command_runner.go:130] >         health {
	I0415 19:25:12.721198    2716 command_runner.go:130] >            lameduck 5s
	I0415 19:25:12.721198    2716 command_runner.go:130] >         }
	I0415 19:25:12.721198    2716 command_runner.go:130] >         ready
	I0415 19:25:12.721198    2716 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0415 19:25:12.721198    2716 command_runner.go:130] >            pods insecure
	I0415 19:25:12.721198    2716 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0415 19:25:12.721198    2716 command_runner.go:130] >            ttl 30
	I0415 19:25:12.721198    2716 command_runner.go:130] >         }
	I0415 19:25:12.721198    2716 command_runner.go:130] >         prometheus :9153
	I0415 19:25:12.721198    2716 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0415 19:25:12.721198    2716 command_runner.go:130] >            max_concurrent 1000
	I0415 19:25:12.721198    2716 command_runner.go:130] >         }
	I0415 19:25:12.721198    2716 command_runner.go:130] >         cache 30
	I0415 19:25:12.721198    2716 command_runner.go:130] >         loop
	I0415 19:25:12.721198    2716 command_runner.go:130] >         reload
	I0415 19:25:12.721198    2716 command_runner.go:130] >         loadbalance
	I0415 19:25:12.721198    2716 command_runner.go:130] >     }
	I0415 19:25:12.721198    2716 command_runner.go:130] > kind: ConfigMap
	I0415 19:25:12.721198    2716 command_runner.go:130] > metadata:
	I0415 19:25:12.721198    2716 command_runner.go:130] >   creationTimestamp: "2024-04-15T19:24:58Z"
	I0415 19:25:12.721198    2716 command_runner.go:130] >   name: coredns
	I0415 19:25:12.721198    2716 command_runner.go:130] >   namespace: kube-system
	I0415 19:25:12.721198    2716 command_runner.go:130] >   resourceVersion: "271"
	I0415 19:25:12.721198    2716 command_runner.go:130] >   uid: 8d1ff511-93dc-4477-8bf3-bdcc02b55248
	I0415 19:25:12.723200    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 19:25:12.820058    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:25:13.162482    2716 command_runner.go:130] > configmap/coredns replaced
	I0415 19:25:13.162482    2716 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 19:25:13.163882    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:13.163969    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:13.164710    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:25:13.164710    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:25:13.166414    2716 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 19:25:13.166662    2716 node_ready.go:35] waiting up to 6m0s for node "multinode-841000" to be "Ready" ...
	I0415 19:25:13.166662    2716 round_trippers.go:463] GET https://172.19.62.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0415 19:25:13.166662    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.166662    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.166662    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.166662    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:13.166662    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.166662    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.166662    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.184653    2716 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0415 19:25:13.185310    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.185310    2716 round_trippers.go:580]     Audit-Id: bb6b9523-4d8a-4957-ae71-f1e090ac09c3
	I0415 19:25:13.185425    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.185425    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.185467    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.185541    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.185310    2716 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0415 19:25:13.185541    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.185741    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.185844    2716 round_trippers.go:580]     Audit-Id: 2ec9d5fc-76f8-40ba-b04a-d698081275a9
	I0415 19:25:13.185902    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.185902    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.185942    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.185942    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.185942    2716 round_trippers.go:580]     Content-Length: 291
	I0415 19:25:13.185942    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.185942    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:13.185942    2716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"380","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.186903    2716 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"380","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.186998    2716 round_trippers.go:463] PUT https://172.19.62.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0415 19:25:13.187139    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.187168    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.187168    2716 round_trippers.go:473]     Content-Type: application/json
	I0415 19:25:13.187168    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.213841    2716 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0415 19:25:13.213909    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.213909    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.213978    2716 round_trippers.go:580]     Audit-Id: f6b375a1-2157-490f-a913-b3265531fe86
	I0415 19:25:13.213978    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.213978    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.213978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.213978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.214042    2716 round_trippers.go:580]     Content-Length: 291
	I0415 19:25:13.214042    2716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"395","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.676776    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:13.676944    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.677025    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.677025    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.677025    2716 round_trippers.go:463] GET https://172.19.62.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0415 19:25:13.677025    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.677025    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.677025    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.682592    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:13.682592    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.682592    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.682592    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Audit-Id: 6cc27bdc-c711-4449-b866-bf5b9fafd3d0
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.683343    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:13.685590    2716 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 19:25:13.685590    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Audit-Id: 0b0f90a5-166f-495e-bc27-91cf5df4e81e
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.685590    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.685590    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Content-Length: 291
	I0415 19:25:13.685590    2716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"406","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.685590    2716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-841000" context rescaled to 1 replicas
	I0415 19:25:14.168541    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:14.168541    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:14.168541    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:14.168541    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:14.172541    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:14.173026    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:14.173084    2716 round_trippers.go:580]     Audit-Id: f87ad259-4fcc-4e84-9573-ccfa435000f3
	I0415 19:25:14.173084    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:14.173137    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:14.173137    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:14.173137    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:14.173137    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:14 GMT
	I0415 19:25:14.173137    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:14.674384    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:14.674384    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:14.674529    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:14.674529    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:14.677762    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:14.678495    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:14 GMT
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Audit-Id: f19a0b63-15f0-446f-87a6-aa46e4e2ab0a
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:14.678495    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:14.678495    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:14.678782    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:14.874203    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:14.874203    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:14.876058    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:14.876434    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:25:14.877232    2716 addons.go:234] Setting addon default-storageclass=true in "multinode-841000"
	I0415 19:25:14.878077    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:25:14.879209    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:14.883387    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:14.883387    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:14.886293    2716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:25:14.889482    2716 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:25:14.889482    2716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 19:25:14.889482    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:15.180652    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:15.180652    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:15.180652    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:15.180652    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:15.186711    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:25:15.187243    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:15.187243    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:15.187243    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:15 GMT
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Audit-Id: 56c251e5-ac69-44d3-bc08-8fec6a99784f
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:15.187860    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:15.188761    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:15.672694    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:15.672822    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:15.672822    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:15.672822    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:15.687150    2716 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0415 19:25:15.687150    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Audit-Id: 064c75b2-c449-479c-bc0f-bf148099d815
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:15.687242    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:15.687242    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:15 GMT
	I0415 19:25:15.687313    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:16.182019    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:16.182078    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:16.182186    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:16.182186    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:16.185527    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:16.186356    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:16.186356    2716 round_trippers.go:580]     Audit-Id: dcaf4b48-25a5-4d75-b7e4-51a61a6cd5fb
	I0415 19:25:16.186356    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:16.186422    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:16.186422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:16.186422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:16.186422    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:16 GMT
	I0415 19:25:16.186422    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:16.674058    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:16.674119    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:16.674119    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:16.674119    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:16.677551    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:16.678518    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:16.678518    2716 round_trippers.go:580]     Audit-Id: edbd2427-b881-4d80-8f4e-f7dfbba489dd
	I0415 19:25:16.678518    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:16.678518    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:16.678570    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:16.678570    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:16.678570    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:16 GMT
	I0415 19:25:16.678917    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:17.166733    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:17.166733    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:17.166733    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:17.166733    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:17.171392    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:17.171454    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:17.171454    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:17.171454    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:17 GMT
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Audit-Id: 7be2eb4f-947d-45ea-90e3-05a60ca446bc
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:17.172026    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:17.368048    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:17.368048    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:17.368918    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:25:17.490902    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:17.490902    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:17.491910    2716 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 19:25:17.491910    2716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 19:25:17.491910    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:17.673369    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:17.673441    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:17.673441    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:17.673441    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:17.678006    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:17.678006    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:17.678006    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:17 GMT
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Audit-Id: 766366c7-dddc-4a76-9d73-ce49b7182b44
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:17.678091    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:17.678487    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:17.679126    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:18.182494    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:18.182902    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:18.182970    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:18.182970    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:18.188090    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:18.188090    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:18.188090    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:18.188347    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:18.188347    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:18.188347    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:18.188347    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:18 GMT
	I0415 19:25:18.188347    2716 round_trippers.go:580]     Audit-Id: e79db893-4e5c-4ee0-8f7c-f91ff1256163
	I0415 19:25:18.188686    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:18.676365    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:18.676365    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:18.676479    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:18.676479    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:18.680782    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:18.680782    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:18.680782    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:18.680782    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:18 GMT
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Audit-Id: 173d487f-e842-460b-8b59-eeec3bb328d3
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:18.681603    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:19.168847    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:19.168997    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:19.168997    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:19.168997    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:19.510246    2716 round_trippers.go:574] Response Status: 200 OK in 340 milliseconds
	I0415 19:25:19.510246    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:19.510246    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:19.510246    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:19 GMT
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Audit-Id: a6fd8c22-d458-4d90-a9c2-6b2048fd4e38
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:19.510614    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:19.675169    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:19.675169    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:19.675169    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:19.675169    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:19.686796    2716 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0415 19:25:19.687109    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:19.687109    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:19.687109    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:19.687109    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:19 GMT
	I0415 19:25:19.687109    2716 round_trippers.go:580]     Audit-Id: 55a37516-25f9-4537-9307-441fa2b596ab
	I0415 19:25:19.687176    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:19.687176    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:19.688539    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:19.689314    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:19.875679    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:19.875679    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:19.876640    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:25:20.167206    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:20.167206    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:20.167206    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:20.167206    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:20.172311    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:20.172311    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:20.172456    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:20.172456    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:20 GMT
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Audit-Id: 054b8ae2-9991-4db7-a332-5112cb975549
	I0415 19:25:20.172767    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:20.214070    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:25:20.214401    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:20.214591    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:25:20.362464    2716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:25:20.673468    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:20.673468    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:20.673468    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:20.673468    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:20.677209    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:20.677209    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Audit-Id: cf447d99-e5a8-4a4f-bd48-fea652a8a62e
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:20.677209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:20.677209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:20 GMT
	I0415 19:25:20.677209    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:21.036390    2716 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0415 19:25:21.036390    2716 command_runner.go:130] > pod/storage-provisioner created
	I0415 19:25:21.178785    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:21.178785    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:21.178785    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:21.178785    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:21.183352    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:21.183352    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:21.183352    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:21.183352    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:21 GMT
	I0415 19:25:21.183352    2716 round_trippers.go:580]     Audit-Id: 35c3a732-5b63-4b42-8e84-2ced1a30fea9
	I0415 19:25:21.184358    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:21.184358    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:21.184358    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:21.184507    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:21.669812    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:21.669812    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:21.669812    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:21.669812    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:21.674922    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:21.675198    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:21.675198    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:21.675198    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:21 GMT
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Audit-Id: 398eb0fc-7c8d-48e5-8f24-3c88f3a1b09e
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:21.675573    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:22.176683    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:22.176794    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.176794    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.176794    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.181422    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:22.181422    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Audit-Id: 1ed97db8-c500-44dc-ab1a-e9ee18ff1e26
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.181422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.181422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.181804    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:22.182335    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:22.668875    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:22.668875    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.668875    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.668875    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.672480    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:22.672862    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Audit-Id: e56d5f28-d578-45d1-944c-d994261863f7
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.672862    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.672862    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.673236    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:22.675816    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:25:22.675816    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:22.676514    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:25:22.815934    2716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 19:25:22.983976    2716 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0415 19:25:22.985170    2716 round_trippers.go:463] GET https://172.19.62.237:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 19:25:22.985170    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.985170    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.985170    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.988567    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:22.989277    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.989277    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Content-Length: 1273
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Audit-Id: 1027dbcc-df4b-471b-bb6f-f54038aaba64
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.989402    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.989402    2716 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"standard","uid":"522fae16-007d-46c3-bc39-f9b62496ebdd","resourceVersion":"435","creationTimestamp":"2024-04-15T19:25:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T19:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0415 19:25:22.990347    2716 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"522fae16-007d-46c3-bc39-f9b62496ebdd","resourceVersion":"435","creationTimestamp":"2024-04-15T19:25:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T19:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 19:25:22.990437    2716 round_trippers.go:463] PUT https://172.19.62.237:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 19:25:22.990437    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.990437    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.990437    2716 round_trippers.go:473]     Content-Type: application/json
	I0415 19:25:22.990437    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.994755    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:22.994755    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.994755    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.994755    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Content-Length: 1220
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Audit-Id: a7c3d187-311a-4ac4-b672-dee1f28d959e
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.994755    2716 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"522fae16-007d-46c3-bc39-f9b62496ebdd","resourceVersion":"435","creationTimestamp":"2024-04-15T19:25:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T19:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 19:25:22.998220    2716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 19:25:23.001505    2716 addons.go:505] duration metric: took 10.5662789s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 19:25:23.173027    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:23.173027    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:23.173027    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:23.173027    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:23.176385    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:23.176385    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:23.176385    2716 round_trippers.go:580]     Audit-Id: 7c0cd735-491e-439e-8745-871492a2f428
	I0415 19:25:23.176385    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:23.177293    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:23.177293    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:23.177293    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:23.177293    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:23 GMT
	I0415 19:25:23.177629    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:23.675226    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:23.675226    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:23.675226    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:23.675226    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:23.679838    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:23.679838    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Audit-Id: 7c238fa8-e693-4d45-8b35-c6f036bb11d0
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:23.679838    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:23.679838    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:23 GMT
	I0415 19:25:23.680395    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:24.174936    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:24.174936    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:24.174936    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:24.174936    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:24.178355    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:24.178355    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:24.178355    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:24.178355    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:24.178355    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:24 GMT
	I0415 19:25:24.178355    2716 round_trippers.go:580]     Audit-Id: 84c024ca-b1b9-4233-9284-14a056994490
	I0415 19:25:24.179369    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:24.179369    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:24.179468    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:24.674426    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:24.674489    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:24.674539    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:24.674539    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:24.680860    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:25:24.680860    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:24.680860    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:24.680860    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:24 GMT
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Audit-Id: 3cb256e8-706c-40f4-b254-a31c63b8dd98
	I0415 19:25:24.681524    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:24.681613    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:25.174606    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:25.174606    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:25.174606    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:25.174724    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:25.179163    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:25.179163    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:25.179163    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:25.179163    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:25 GMT
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Audit-Id: 773dab00-294d-49f0-8aba-ecdbe5221693
	I0415 19:25:25.180020    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:25.673992    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:25.674114    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:25.674114    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:25.674114    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:25.681425    2716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 19:25:25.681425    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:25.681425    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:25.681425    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:25 GMT
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Audit-Id: 8dddb1ba-9c41-460b-a970-b1b8edc52163
	I0415 19:25:25.681978    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:26.175179    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:26.175290    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:26.175290    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:26.175290    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:26.180548    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:26.180715    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:26.180760    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:26.180760    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:26.180821    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:26 GMT
	I0415 19:25:26.180885    2716 round_trippers.go:580]     Audit-Id: 28ac2693-229d-46a1-97d2-f6ad22178a7a
	I0415 19:25:26.180910    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:26.180910    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:26.181189    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:26.674402    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:26.674402    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:26.674402    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:26.674402    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:26.681589    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:25:26.681633    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:26.681633    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:26 GMT
	I0415 19:25:26.681633    2716 round_trippers.go:580]     Audit-Id: d6dfdc52-f909-49ad-a92d-1687a20beb38
	I0415 19:25:26.681633    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:26.681701    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:26.681701    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:26.681701    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:26.681971    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:26.682512    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:27.179466    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:27.179466    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.179466    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.179466    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.185213    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:27.185818    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.185818    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.185818    2716 round_trippers.go:580]     Audit-Id: 7f1378cf-dd16-4dc3-825d-57c590da8e1f
	I0415 19:25:27.185818    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.185886    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.185886    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.185886    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.186028    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:27.186734    2716 node_ready.go:49] node "multinode-841000" has status "Ready":"True"
	I0415 19:25:27.186762    2716 node_ready.go:38] duration metric: took 14.0199861s for node "multinode-841000" to be "Ready" ...
	I0415 19:25:27.186762    2716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:25:27.186762    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:27.186762    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.186762    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.186762    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.198256    2716 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0415 19:25:27.198256    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.198256    2716 round_trippers.go:580]     Audit-Id: fa7eaaad-892e-49e6-b002-1dd49bffdb44
	I0415 19:25:27.198256    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.198256    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.198256    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.199249    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.199249    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.200171    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"443","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56336 chars]
	I0415 19:25:27.206176    2716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:27.206176    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:27.206176    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.206176    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.206176    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.211169    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:27.211169    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.211251    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.211251    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Audit-Id: fef8e4f7-0d1c-4e7b-91ca-7689225a0965
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.211445    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"443","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0415 19:25:27.212175    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:27.212175    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.212175    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.212175    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.215583    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:27.216131    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Audit-Id: 9a558301-9bb6-47c1-8c02-f972abeb6bb7
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.216131    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.216131    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.216611    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:27.720607    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:27.720607    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.720607    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.720607    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.724209    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:27.724209    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.724209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Audit-Id: 5caf761a-4b20-4cae-a0f1-c5d8ce528a58
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.724209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.725453    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"443","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0415 19:25:27.726505    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:27.726505    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.726505    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.726505    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.730085    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:27.730085    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Audit-Id: 856e4f75-5138-4018-a158-bdcc9a9f1fc1
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.730085    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.730085    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.731111    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:28.210989    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:28.210989    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.210989    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.210989    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.215609    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:28.215609    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.215609    2716 round_trippers.go:580]     Audit-Id: 3769fd09-bbb9-49af-8504-af4ec44b2089
	I0415 19:25:28.215609    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.215609    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.215609    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.215609    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.215908    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.216620    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"454","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6807 chars]
	I0415 19:25:28.217935    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:28.217935    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.218011    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.218011    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.220205    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:28.220205    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.220205    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.220205    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.220205    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.221274    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.221274    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.221298    2716 round_trippers.go:580]     Audit-Id: 63cd24d9-2196-4df5-ad2a-9e45561631f3
	I0415 19:25:28.222508    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:28.709061    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:28.709061    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.709235    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.709235    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.713959    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:28.714472    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.714472    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Audit-Id: 8abccf35-d146-4b42-9be7-d3966cf6292f
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.714599    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.714816    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"454","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6807 chars]
	I0415 19:25:28.715626    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:28.715626    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.715719    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.715719    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.717995    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:28.719009    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Audit-Id: 03720550-17d7-49b4-809e-5f1d8b43483a
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.719070    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.719070    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.719369    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.210757    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:29.210757    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.210757    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.210757    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.215816    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:29.215816    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.215816    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.215816    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Audit-Id: a83047c7-123d-4541-ae75-138589f8941e
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.215816    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0415 19:25:29.216985    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.216985    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.216985    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.216985    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.220580    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.220580    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.220580    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.220580    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Audit-Id: ebc96133-96f3-47a0-8176-757cba31fe63
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.221389    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.221955    2716 pod_ready.go:92] pod "coredns-76f75df574-vqqtx" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.221955    2716 pod_ready.go:81] duration metric: took 2.015763s for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.221955    2716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.221955    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-841000
	I0415 19:25:29.221955    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.221955    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.221955    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.224783    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:29.224783    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.225773    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.225773    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.225805    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.225805    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.225805    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.225805    2716 round_trippers.go:580]     Audit-Id: 877305b6-4f03-4500-899a-ec1ce64b2a0a
	I0415 19:25:29.226204    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-841000","namespace":"kube-system","uid":"ec0b243b-fd9f-4081-82dc-532086096935","resourceVersion":"420","creationTimestamp":"2024-04-15T19:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.237:2379","kubernetes.io/config.hash":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.mirror":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.seen":"2024-04-15T19:24:49.499002669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0415 19:25:29.226859    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.226933    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.226987    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.226987    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.228723    2716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0415 19:25:29.228723    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.228723    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.228723    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Audit-Id: 1a966896-95d1-4476-9966-f1761bd36cd5
	I0415 19:25:29.230050    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.230108    2716 pod_ready.go:92] pod "etcd-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.230108    2716 pod_ready.go:81] duration metric: took 8.1526ms for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.230108    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.230108    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-841000
	I0415 19:25:29.230108    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.230108    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.230643    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.233100    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:29.233100    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.233100    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.233100    2716 round_trippers.go:580]     Audit-Id: 588ba998-aa85-4d45-9ad8-3e997534c7d9
	I0415 19:25:29.233100    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.233498    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.233498    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.233498    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.233793    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-841000","namespace":"kube-system","uid":"092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b","resourceVersion":"419","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.237:8443","kubernetes.io/config.hash":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.mirror":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.seen":"2024-04-15T19:24:59.013465769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0415 19:25:29.234236    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.234236    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.234236    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.234236    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.239265    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:29.239404    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.239404    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.239404    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Audit-Id: 4008e473-e369-46c2-987d-535707016b4f
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.239656    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.240493    2716 pod_ready.go:92] pod "kube-apiserver-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.240493    2716 pod_ready.go:81] duration metric: took 10.3852ms for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.240493    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.240493    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-841000
	I0415 19:25:29.240493    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.240493    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.240493    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.244136    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.244294    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.244294    2716 round_trippers.go:580]     Audit-Id: da3730a9-8a2e-4990-bf42-f03d354d6f3f
	I0415 19:25:29.244294    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.244357    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.244357    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.244357    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.244357    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.245148    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-841000","namespace":"kube-system","uid":"8922765c-684e-491a-83a0-e06cec665bbd","resourceVersion":"417","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.mirror":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.seen":"2024-04-15T19:24:59.013467070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0415 19:25:29.245148    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.245148    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.245148    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.245148    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.248301    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.248301    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.248301    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Audit-Id: 6d6be13f-1c81-44ed-a5a8-0aea1b6a2020
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.248301    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.248301    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.248301    2716 pod_ready.go:92] pod "kube-controller-manager-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.248301    2716 pod_ready.go:81] duration metric: took 7.8084ms for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.248301    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.248301    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7v79z
	I0415 19:25:29.248301    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.248301    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.248301    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.251774    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.251774    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Audit-Id: 89894b9e-8b20-488c-9b09-015d5270a899
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.251774    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.251774    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.251774    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7v79z","generateName":"kube-proxy-","namespace":"kube-system","uid":"0a08abf8-9fa3-4fab-86cc-1b709bc0d263","resourceVersion":"414","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c07d15d0-ec90-403c-8aa0-1c81c17e9eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07d15d0-ec90-403c-8aa0-1c81c17e9eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0415 19:25:29.253589    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.253655    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.253655    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.253655    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.256237    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:29.256727    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.256727    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.256727    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.256727    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.256727    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.256727    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.256844    2716 round_trippers.go:580]     Audit-Id: cdffaddd-d3d0-4aac-b6b2-4192ca31bf0d
	I0415 19:25:29.257183    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.257584    2716 pod_ready.go:92] pod "kube-proxy-7v79z" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.257648    2716 pod_ready.go:81] duration metric: took 9.347ms for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.257704    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.414353    2716 request.go:629] Waited for 156.5895ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:25:29.414353    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:25:29.414353    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.414353    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.414353    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.417953    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.418983    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.418983    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.418983    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.419016    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.419016    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.419016    2716 round_trippers.go:580]     Audit-Id: 5589bc83-da3e-4372-b8c8-e5dd13256b78
	I0415 19:25:29.419016    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.419215    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-841000","namespace":"kube-system","uid":"67374ab1-2ea0-4b43-82b8-1b666d274f2f","resourceVersion":"418","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.mirror":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.seen":"2024-04-15T19:24:59.013468170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0415 19:25:29.619907    2716 request.go:629] Waited for 199.7705ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.619907    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.619907    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.619907    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.619907    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.623511    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.623511    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.623511    2716 round_trippers.go:580]     Audit-Id: fcd480a9-bd58-4a01-8cdd-a19aeda905ac
	I0415 19:25:29.623511    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.623511    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.623511    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.623511    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.623944    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.624800    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.625321    2716 pod_ready.go:92] pod "kube-scheduler-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.625409    2716 pod_ready.go:81] duration metric: took 367.6145ms for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.625409    2716 pod_ready.go:38] duration metric: took 2.4386278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:25:29.625498    2716 api_server.go:52] waiting for apiserver process to appear ...
	I0415 19:25:29.640280    2716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:25:29.668788    2716 command_runner.go:130] > 2019
	I0415 19:25:29.669807    2716 api_server.go:72] duration metric: took 17.2348031s to wait for apiserver process to appear ...
	I0415 19:25:29.669890    2716 api_server.go:88] waiting for apiserver healthz status ...
	I0415 19:25:29.669962    2716 api_server.go:253] Checking apiserver healthz at https://172.19.62.237:8443/healthz ...
	I0415 19:25:29.676471    2716 api_server.go:279] https://172.19.62.237:8443/healthz returned 200:
	ok
	I0415 19:25:29.677272    2716 round_trippers.go:463] GET https://172.19.62.237:8443/version
	I0415 19:25:29.677272    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.677272    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.677272    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.678840    2716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0415 19:25:29.679442    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.679442    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.679442    2716 round_trippers.go:580]     Content-Length: 263
	I0415 19:25:29.679508    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.679508    2716 round_trippers.go:580]     Audit-Id: 8fb8f1c3-a19b-4ec4-84b8-6c2e25aaf9ed
	I0415 19:25:29.679583    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.679583    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.679583    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.679583    2716 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0415 19:25:29.679715    2716 api_server.go:141] control plane version: v1.29.3
	I0415 19:25:29.679715    2716 api_server.go:131] duration metric: took 9.8248ms to wait for apiserver health ...
	I0415 19:25:29.679715    2716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 19:25:29.821517    2716 request.go:629] Waited for 141.5559ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:29.821517    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:29.821517    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.821517    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.821517    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.827135    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:29.827135    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.827135    2716 round_trippers.go:580]     Audit-Id: af0f0293-ed9b-42ec-9630-d0cc0ac3eb59
	I0415 19:25:29.827639    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.827639    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.827639    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.827639    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.827639    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.831888    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"464"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0415 19:25:29.835249    2716 system_pods.go:59] 8 kube-system pods found
	I0415 19:25:29.835249    2716 system_pods.go:61] "coredns-76f75df574-vqqtx" [5cce6545-fec3-4334-9041-de82b0e42801] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "etcd-multinode-841000" [ec0b243b-fd9f-4081-82dc-532086096935] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kindnet-zrzd6" [53c9b26b-4969-46c3-ba6e-f831423010a8] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-apiserver-multinode-841000" [092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-controller-manager-multinode-841000" [8922765c-684e-491a-83a0-e06cec665bbd] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-proxy-7v79z" [0a08abf8-9fa3-4fab-86cc-1b709bc0d263] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-scheduler-multinode-841000" [67374ab1-2ea0-4b43-82b8-1b666d274f2f] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "storage-provisioner" [d93f9b0a-834d-4028-ae0d-5e1287ef5b9e] Running
	I0415 19:25:29.835249    2716 system_pods.go:74] duration metric: took 155.5324ms to wait for pod list to return data ...
	I0415 19:25:29.835249    2716 default_sa.go:34] waiting for default service account to be created ...
	I0415 19:25:30.023615    2716 request.go:629] Waited for 188.1719ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/default/serviceaccounts
	I0415 19:25:30.023615    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/default/serviceaccounts
	I0415 19:25:30.023615    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:30.023615    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:30.023862    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:30.027264    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:30.028024    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:30 GMT
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Audit-Id: ec077c93-db7e-40cc-8490-ed09389a771b
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:30.028024    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:30.028024    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Content-Length: 261
	I0415 19:25:30.028024    2716 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d2cffbc1-13e4-4afc-b8e1-a84c6688a045","resourceVersion":"336","creationTimestamp":"2024-04-15T19:25:12Z"}}]}
	I0415 19:25:30.028024    2716 default_sa.go:45] found service account: "default"
	I0415 19:25:30.028024    2716 default_sa.go:55] duration metric: took 192.7738ms for default service account to be created ...
	I0415 19:25:30.028024    2716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 19:25:30.224638    2716 request.go:629] Waited for 195.8866ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:30.224638    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:30.224638    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:30.224638    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:30.224638    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:30.241503    2716 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0415 19:25:30.241503    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:30.241503    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:30.241503    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:30 GMT
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Audit-Id: 475cb4a0-1a82-4504-b709-17a6153ac252
	I0415 19:25:30.243203    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"467"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0415 19:25:30.246262    2716 system_pods.go:86] 8 kube-system pods found
	I0415 19:25:30.246411    2716 system_pods.go:89] "coredns-76f75df574-vqqtx" [5cce6545-fec3-4334-9041-de82b0e42801] Running
	I0415 19:25:30.246411    2716 system_pods.go:89] "etcd-multinode-841000" [ec0b243b-fd9f-4081-82dc-532086096935] Running
	I0415 19:25:30.246411    2716 system_pods.go:89] "kindnet-zrzd6" [53c9b26b-4969-46c3-ba6e-f831423010a8] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-apiserver-multinode-841000" [092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-controller-manager-multinode-841000" [8922765c-684e-491a-83a0-e06cec665bbd] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-proxy-7v79z" [0a08abf8-9fa3-4fab-86cc-1b709bc0d263] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-scheduler-multinode-841000" [67374ab1-2ea0-4b43-82b8-1b666d274f2f] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "storage-provisioner" [d93f9b0a-834d-4028-ae0d-5e1287ef5b9e] Running
	I0415 19:25:30.246590    2716 system_pods.go:126] duration metric: took 218.5033ms to wait for k8s-apps to be running ...
	I0415 19:25:30.246648    2716 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 19:25:30.261020    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:25:30.290310    2716 system_svc.go:56] duration metric: took 43.72ms WaitForService to wait for kubelet
	I0415 19:25:30.290310    2716 kubeadm.go:576] duration metric: took 17.8553016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:25:30.290441    2716 node_conditions.go:102] verifying NodePressure condition ...
	I0415 19:25:30.413685    2716 request.go:629] Waited for 122.9101ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes
	I0415 19:25:30.413969    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes
	I0415 19:25:30.413969    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:30.413969    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:30.413969    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:30.417798    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:30.417798    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:30.417798    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:30.417798    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:30 GMT
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Audit-Id: c19975c5-ae6b-43e5-9cdf-995055fceb8b
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:30.418457    2716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"467"},"items":[{"metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 5019 chars]
	I0415 19:25:30.418990    2716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 19:25:30.419129    2716 node_conditions.go:123] node cpu capacity is 2
	I0415 19:25:30.419197    2716 node_conditions.go:105] duration metric: took 128.7544ms to run NodePressure ...
	I0415 19:25:30.419197    2716 start.go:240] waiting for startup goroutines ...
	I0415 19:25:30.419197    2716 start.go:245] waiting for cluster config update ...
	I0415 19:25:30.419197    2716 start.go:254] writing updated cluster config ...
	I0415 19:25:30.423355    2716 out.go:177] 
	I0415 19:25:30.433747    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:25:30.433747    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:25:30.441803    2716 out.go:177] * Starting "multinode-841000-m02" worker node in "multinode-841000" cluster
	I0415 19:25:30.444513    2716 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:25:30.444513    2716 cache.go:56] Caching tarball of preloaded images
	I0415 19:25:30.444725    2716 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:25:30.444725    2716 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 19:25:30.444725    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:25:30.448717    2716 start.go:360] acquireMachinesLock for multinode-841000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 19:25:30.448717    2716 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-841000-m02"
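`acquireMachinesLock` serializes provisioning per machine name so two goroutines never create the same VM concurrently (here it returns immediately, hence "took 0s"). A simplified in-process stand-in for the idea — minikube's real lock is file-based and honors the `Delay:500ms Timeout:13m0s` options shown above, which this sketch omits:

```go
package main

import (
	"fmt"
	"sync"
)

// machineLocks hands out one mutex per machine name. lockFor is an
// illustrative helper, not minikube's actual API.
var (
	mu           sync.Mutex
	machineLocks = map[string]*sync.Mutex{}
)

func lockFor(name string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()
	if _, ok := machineLocks[name]; !ok {
		machineLocks[name] = &sync.Mutex{}
	}
	return machineLocks[name]
}

func main() {
	l := lockFor("multinode-841000-m02")
	l.Lock()
	fmt.Println("acquired machines lock for multinode-841000-m02")
	// ... provision the machine ...
	l.Unlock()
}
```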
	I0415 19:25:30.448717    2716 start.go:93] Provisioning new machine with config: &{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0415 19:25:30.448717    2716 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 19:25:30.451655    2716 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 19:25:30.452652    2716 start.go:159] libmachine.API.Create for "multinode-841000" (driver="hyperv")
	I0415 19:25:30.452652    2716 client.go:168] LocalClient.Create starting
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 19:25:32.484504    2716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 19:25:32.485123    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:32.485123    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 19:25:34.315782    2716 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 19:25:34.315848    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:34.315848    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:25:35.924286    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:25:35.924286    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:35.924649    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:25:39.877176    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:25:39.877451    2716 main.go:141] libmachine: [stderr =====>] : 
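libmachine shells out to PowerShell and parses the `ConvertTo-Json` stdout above to choose a switch (external switches sort first; the Default Switch is the fallback). A sketch of that decoding step — `vmSwitch` and `pickSwitch` are illustrative names, not the driver's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch matches the fields emitted by
// `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch decodes the PowerShell JSON and returns the first entry,
// which the query above sorts so that external switches come first.
func pickSwitch(stdout []byte) (vmSwitch, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(stdout, &switches); err != nil {
		return vmSwitch{}, err
	}
	if len(switches) == 0 {
		return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
	}
	return switches[0], nil
}

func main() {
	// stdout copied from the PowerShell output above.
	stdout := []byte(`[
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]`)
	sw, err := pickSwitch(stdout)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Using switch %q\n", sw.Name) // Using switch "Default Switch"
}
```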
	I0415 19:25:39.880044    2716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 19:25:40.414681    2716 main.go:141] libmachine: Creating SSH key...
	I0415 19:25:40.681566    2716 main.go:141] libmachine: Creating VM...
	I0415 19:25:40.681566    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:25:43.813744    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:25:43.813744    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:43.814140    2716 main.go:141] libmachine: Using switch "Default Switch"
	I0415 19:25:43.814524    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:25:45.695906    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:25:45.696338    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:45.696338    2716 main.go:141] libmachine: Creating VHD
	I0415 19:25:45.696533    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 19:25:49.717524    2716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B52D9905-E0B9-4EC9-BCF9-7C8D0946F959
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 19:25:49.717524    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:49.718564    2716 main.go:141] libmachine: Writing magic tar header
	I0415 19:25:49.718601    2716 main.go:141] libmachine: Writing SSH key tar header
	I0415 19:25:49.728605    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 19:25:53.119108    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:25:53.119108    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:53.119630    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\disk.vhd' -SizeBytes 20000MB
	I0415 19:25:55.876757    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:25:55.876757    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:55.876757    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-841000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 19:25:59.821865    2716 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-841000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 19:25:59.822419    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:59.822485    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-841000-m02 -DynamicMemoryEnabled $false
	I0415 19:26:02.285447    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:02.285447    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:02.285447    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-841000-m02 -Count 2
	I0415 19:26:04.637133    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:04.637133    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:04.637133    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-841000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\boot2docker.iso'
	I0415 19:26:07.432080    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:07.432080    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:07.432080    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-841000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\disk.vhd'
	I0415 19:26:10.317032    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:10.318096    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:10.318096    2716 main.go:141] libmachine: Starting VM...
	I0415 19:26:10.318147    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-841000-m02
	I0415 19:26:13.637619    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:13.637619    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:13.637619    2716 main.go:141] libmachine: Waiting for host to start...
	I0415 19:26:13.637963    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:16.139550    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:16.140498    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:16.140498    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:18.868904    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:18.868904    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:19.884310    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:22.305674    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:22.305674    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:22.305832    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:25.067708    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:25.067708    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:26.075215    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:28.515976    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:28.515976    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:28.516755    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:31.291431    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:31.291431    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:32.306916    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:34.733876    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:34.733876    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:34.734808    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:37.458005    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:37.458650    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:38.462177    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:40.885499    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:40.885499    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:40.885499    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:43.720556    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:26:43.720556    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:43.720556    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:46.106444    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:46.107194    2716 main.go:141] libmachine: [stderr =====>] : 
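The "Waiting for host to start..." phase above is a poll loop: libmachine repeatedly runs `( Hyper-V\Get-VM <name> ).state` and `((...).networkadapters[0]).ipaddresses[0]` until the adapter reports an address. A minimal local sketch of that retry shape, with a stub standing in for the remote PowerShell query (the stub and its values are illustrative, not part of the log):

```shell
# Stub for the Hyper-V IP query: pretend the adapter reports
# no address until the 4th poll, mirroring the empty [stdout] lines above.
poll_ip() {
  if [ "$1" -ge 4 ]; then echo "172.19.55.167"; fi
  return 0
}

ip=""
tries=0
while [ -z "$ip" ] && [ "$tries" -lt 10 ]; do
  tries=$((tries + 1))
  ip=$(poll_ip "$tries")
  # the real loop also re-checks the VM state ("Running") and
  # sleeps roughly a second between attempts
done
echo "$ip after $tries polls"
```

In the log this takes five rounds of state/IP queries (19:26:13 to 19:26:43) before `172.19.55.167` appears.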
	I0415 19:26:46.107194    2716 machine.go:94] provisionDockerMachine start ...
	I0415 19:26:46.107300    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:48.465768    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:48.465768    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:48.465768    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:51.247899    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:26:51.248957    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:51.255858    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:26:51.264764    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:26:51.264764    2716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:26:51.416489    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 19:26:51.416489    2716 buildroot.go:166] provisioning hostname "multinode-841000-m02"
	I0415 19:26:51.416489    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:53.746912    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:53.747745    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:53.747745    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:56.518120    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:26:56.518730    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:56.525102    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:26:56.525728    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:26:56.525728    2716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841000-m02 && echo "multinode-841000-m02" | sudo tee /etc/hostname
	I0415 19:26:56.692642    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841000-m02
	
	I0415 19:26:56.692766    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:58.998121    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:58.998121    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:58.998121    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:01.732214    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:01.732214    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:01.739502    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:01.740235    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:01.740235    2716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:27:01.897396    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
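The `/etc/hosts` command above has two branches: rewrite an existing `127.0.1.1` line if one is present, otherwise append one. The same logic can be exercised against a scratch file (portable `[[:space:]]` classes replace the log's `\s`; the `oldname` entry is a made-up fixture):

```shell
# Scratch copy of /etc/hosts with a stale 127.0.1.1 entry.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"

name=multinode-841000-m02   # hostname being provisioned in the log
if ! grep -q "[[:space:]]$name" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # existing 127.0.1.1 line: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # no 127.0.1.1 line yet: append one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
result=$(grep '^127\.0\.1\.1' "$hosts")
rm -f "$hosts"
```

The empty `SSH cmd err, output` above is consistent with the rewrite branch, which produces no stdout (only the `tee` branch would echo the line).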
	I0415 19:27:01.897462    2716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 19:27:01.897462    2716 buildroot.go:174] setting up certificates
	I0415 19:27:01.897462    2716 provision.go:84] configureAuth start
	I0415 19:27:01.897462    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:04.195532    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:04.195532    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:04.195532    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:06.956088    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:06.956088    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:06.957124    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:09.284697    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:09.284697    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:09.285465    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:12.052276    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:12.053064    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:12.053064    2716 provision.go:143] copyHostCerts
	I0415 19:27:12.053064    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 19:27:12.053064    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:27:12.053064    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 19:27:12.054089    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:27:12.054810    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 19:27:12.055514    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:27:12.055577    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 19:27:12.055577    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:27:12.056814    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 19:27:12.057065    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:27:12.057265    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 19:27:12.057440    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
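`copyHostCerts` above follows a remove-then-copy pattern for each of `cert.pem`, `key.pem`, and `ca.pem`, so reruns are idempotent rather than failing or appending. A scratch-file rendering of that pattern (file names and contents are illustrative):

```shell
# Stand-ins for a source cert and a stale destination copy.
src=$(mktemp)
dst=$(mktemp)
echo "new-cert" > "$src"
echo "stale-cert" > "$dst"

if [ -e "$dst" ]; then
  rm -f "$dst"          # "found ..., removing ..." in the log
fi
cp "$src" "$dst"        # "cp: ... --> ... (N bytes)"
copied=$(cat "$dst")
rm -f "$src" "$dst"
```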
	I0415 19:27:12.058060    2716 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-841000-m02 san=[127.0.0.1 172.19.55.167 localhost minikube multinode-841000-m02]
	I0415 19:27:12.345155    2716 provision.go:177] copyRemoteCerts
	I0415 19:27:12.358149    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:27:12.359154    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:14.692284    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:14.692284    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:14.693224    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:17.471628    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:17.471628    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:17.472723    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:27:17.585687    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2274964s)
	I0415 19:27:17.585687    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 19:27:17.586690    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 19:27:17.637828    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 19:27:17.638819    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 19:27:17.688236    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 19:27:17.688236    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0415 19:27:17.736238    2716 provision.go:87] duration metric: took 15.8386475s to configureAuth
	I0415 19:27:17.736312    2716 buildroot.go:189] setting minikube options for container-runtime
	I0415 19:27:17.736889    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:27:17.736998    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:20.064330    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:20.064991    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:20.065106    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:22.840388    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:22.840388    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:22.847718    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:22.848418    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:22.848418    2716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:27:22.997580    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 19:27:22.997580    2716 buildroot.go:70] root file system type: tmpfs
	I0415 19:27:22.997859    2716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:27:22.998019    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:25.303520    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:25.303520    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:25.303520    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:28.140886    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:28.140886    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:28.147106    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:28.147868    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:28.147868    2716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.62.237"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:27:28.330556    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.62.237
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 19:27:28.330556    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:30.713135    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:30.713135    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:30.713507    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:33.501213    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:33.501213    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:33.508843    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:33.509550    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:33.509550    2716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:27:35.731283    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 19:27:35.731283    2716 machine.go:97] duration metric: took 49.6236878s to provisionDockerMachine
	I0415 19:27:35.731721    2716 client.go:171] duration metric: took 2m5.278048s to LocalClient.Create
	I0415 19:27:35.731721    2716 start.go:167] duration metric: took 2m5.278048s to libmachine.API.Create "multinode-841000"
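The unit-update step a few lines above uses a diff-and-swap idiom: the generated unit is written to `docker.service.new`, and only when `diff` reports a difference (or, as in this log, the live unit does not exist yet, so `diff` fails with "can't stat") is it moved into place and the service reloaded. A sketch with scratch files, with a variable assignment standing in for the `systemctl` calls:

```shell
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd v1\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd v2\n' > "$dir/docker.service.new"

action=unchanged
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || {
  # files differ (or the live unit is missing): install the new unit
  mv "$dir/docker.service.new" "$dir/docker.service"
  action=restarted   # real code: daemon-reload, enable docker, restart docker
}
live=$(cat "$dir/docker.service")
rm -rf "$dir"
```

When the two files are identical, `diff` exits 0 and the `||` block is skipped entirely, so an unchanged unit never triggers a docker restart.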
	I0415 19:27:35.731721    2716 start.go:293] postStartSetup for "multinode-841000-m02" (driver="hyperv")
	I0415 19:27:35.731721    2716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:27:35.746795    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:27:35.746795    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:38.048338    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:38.048448    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:38.048631    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:40.817973    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:40.818110    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:40.818110    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:27:40.928323    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1814858s)
	I0415 19:27:40.944198    2716 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 19:27:40.950768    2716 command_runner.go:130] > NAME=Buildroot
	I0415 19:27:40.950768    2716 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0415 19:27:40.950768    2716 command_runner.go:130] > ID=buildroot
	I0415 19:27:40.950768    2716 command_runner.go:130] > VERSION_ID=2023.02.9
	I0415 19:27:40.950768    2716 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0415 19:27:40.950880    2716 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 19:27:40.950959    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 19:27:40.951396    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 19:27:40.952411    2716 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 19:27:40.952411    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 19:27:40.966384    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 19:27:40.986069    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 19:27:41.037354    2716 start.go:296] duration metric: took 5.3055899s for postStartSetup
	I0415 19:27:41.040184    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:43.390814    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:43.390814    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:43.391031    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:46.169335    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:46.169335    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:46.169785    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:27:46.172760    2716 start.go:128] duration metric: took 2m15.7229367s to createHost
	I0415 19:27:46.172913    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:48.496523    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:48.496523    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:48.496789    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:51.266508    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:51.266508    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:51.276633    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:51.277277    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:51.277277    2716 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 19:27:51.418809    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713209271.420392987
	
	I0415 19:27:51.418961    2716 fix.go:216] guest clock: 1713209271.420392987
	I0415 19:27:51.418961    2716 fix.go:229] Guest: 2024-04-15 19:27:51.420392987 +0000 UTC Remote: 2024-04-15 19:27:46.1728414 +0000 UTC m=+366.295365001 (delta=5.247551587s)
	I0415 19:27:51.419072    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:53.750298    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:53.750851    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:53.750851    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:56.547503    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:56.547503    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:56.554566    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:56.554566    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:56.555506    2716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713209271
	I0415 19:27:56.714182    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 19:27:51 UTC 2024
	
	I0415 19:27:56.714182    2716 fix.go:236] clock set: Mon Apr 15 19:27:51 UTC 2024
	 (err=<nil>)
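The clock-sync step above reads `date +%s.%N` from the guest, compares it with the host process's reference time, and then sets the guest clock with `sudo date -s @<epoch>`. The integer-second part of the delta reported by fix.go ("delta=5.247551587s") can be reproduced from the two timestamps in the log (the fractional part is dropped here for simplicity):

```shell
guest=1713209271.420392987   # "guest clock" line in the log (date +%s.%N)
host=1713209266.172841400    # host reference time ("Remote: ... 19:27:46.17")

# whole-second delta; ${var%.*} strips the fractional part
delta=$(( ${guest%.*} - ${host%.*} ))

# command the log then runs on the guest
cmd="sudo date -s @${guest%.*}"
```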
	I0415 19:27:56.714182    2716 start.go:83] releasing machines lock for "multinode-841000-m02", held for 2m26.264273s
	I0415 19:27:56.715141    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:59.007019    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:59.008018    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:59.008111    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:01.759720    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:28:01.759720    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:01.763252    2716 out.go:177] * Found network options:
	I0415 19:28:01.766577    2716 out.go:177]   - NO_PROXY=172.19.62.237
	W0415 19:28:01.771275    2716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 19:28:01.774032    2716 out.go:177]   - NO_PROXY=172.19.62.237
	W0415 19:28:01.775746    2716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 19:28:01.776486    2716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 19:28:01.779879    2716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 19:28:01.779879    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:28:01.793243    2716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 19:28:01.793243    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:28:04.167298    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:04.167298    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:04.167393    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:04.167549    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:04.167619    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:04.167619    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:06.989044    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:28:06.989044    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:06.989044    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:28:07.022259    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:28:07.022780    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:07.022780    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:28:07.155974    2716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0415 19:28:07.155974    2716 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.376051s)
	I0415 19:28:07.155974    2716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0415 19:28:07.155974    2716 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3626872s)
	W0415 19:28:07.155974    2716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 19:28:07.170020    2716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 19:28:07.201143    2716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0415 19:28:07.201427    2716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 19:28:07.201427    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:28:07.201703    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:28:07.241005    2716 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0415 19:28:07.255355    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 19:28:07.291743    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 19:28:07.311572    2716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 19:28:07.326255    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 19:28:07.358979    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:28:07.394832    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 19:28:07.433543    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:28:07.469002    2716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 19:28:07.504081    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 19:28:07.540876    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 19:28:07.577024    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
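The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force `SystemdCgroup = false` (the cgroupfs driver), and normalize the runtime and CNI settings. Two of those substitutions, mirrored in Python as a sketch (not minikube's code; the regexes follow the logged sed expressions):

```python
import re

def configure_containerd(toml: str) -> str:
    """Apply two of the logged `sed -i -r` rewrites to a config.toml string (sketch)."""
    # sandbox_image = ... -> pinned pause image
    toml = re.sub(r'^(\s*)sandbox_image = .*$',
                  r'\1sandbox_image = "registry.k8s.io/pause:3.9"', toml, flags=re.M)
    # SystemdCgroup = ... -> false, i.e. the cgroupfs cgroup driver
    toml = re.sub(r'^(\s*)SystemdCgroup = .*$',
                  r'\1SystemdCgroup = false', toml, flags=re.M)
    return toml

print(configure_containerd('    SystemdCgroup = true\n    sandbox_image = "old"\n'))
```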
	I0415 19:28:07.614539    2716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 19:28:07.636285    2716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0415 19:28:07.650120    2716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 19:28:07.685591    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:07.911661    2716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 19:28:07.946495    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:28:07.961870    2716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 19:28:07.990111    2716 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0415 19:28:07.990170    2716 command_runner.go:130] > [Unit]
	I0415 19:28:07.990170    2716 command_runner.go:130] > Description=Docker Application Container Engine
	I0415 19:28:07.990170    2716 command_runner.go:130] > Documentation=https://docs.docker.com
	I0415 19:28:07.990170    2716 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0415 19:28:07.990170    2716 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0415 19:28:07.990170    2716 command_runner.go:130] > StartLimitBurst=3
	I0415 19:28:07.990170    2716 command_runner.go:130] > StartLimitIntervalSec=60
	I0415 19:28:07.990170    2716 command_runner.go:130] > [Service]
	I0415 19:28:07.990170    2716 command_runner.go:130] > Type=notify
	I0415 19:28:07.990170    2716 command_runner.go:130] > Restart=on-failure
	I0415 19:28:07.990170    2716 command_runner.go:130] > Environment=NO_PROXY=172.19.62.237
	I0415 19:28:07.990170    2716 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0415 19:28:07.990170    2716 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0415 19:28:07.990170    2716 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0415 19:28:07.990170    2716 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0415 19:28:07.990170    2716 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0415 19:28:07.990170    2716 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0415 19:28:07.990170    2716 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0415 19:28:07.990170    2716 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0415 19:28:07.990170    2716 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0415 19:28:07.990170    2716 command_runner.go:130] > ExecStart=
	I0415 19:28:07.990170    2716 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0415 19:28:07.990170    2716 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0415 19:28:07.990170    2716 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0415 19:28:07.990170    2716 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0415 19:28:07.990170    2716 command_runner.go:130] > LimitNOFILE=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > LimitNPROC=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > LimitCORE=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0415 19:28:07.990170    2716 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0415 19:28:07.990170    2716 command_runner.go:130] > TasksMax=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > TimeoutStartSec=0
	I0415 19:28:07.990170    2716 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0415 19:28:07.990170    2716 command_runner.go:130] > Delegate=yes
	I0415 19:28:07.990170    2716 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0415 19:28:07.990766    2716 command_runner.go:130] > KillMode=process
	I0415 19:28:07.990766    2716 command_runner.go:130] > [Install]
	I0415 19:28:07.990766    2716 command_runner.go:130] > WantedBy=multi-user.target
	I0415 19:28:08.005229    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:28:08.043923    2716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 19:28:08.098547    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:28:08.141362    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:28:08.183554    2716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 19:28:08.256797    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:28:08.285417    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:28:08.323205    2716 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0415 19:28:08.337558    2716 ssh_runner.go:195] Run: which cri-dockerd
	I0415 19:28:08.344700    2716 command_runner.go:130] > /usr/bin/cri-dockerd
	I0415 19:28:08.359602    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 19:28:08.379434    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 19:28:08.432111    2716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 19:28:08.657315    2716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 19:28:08.866222    2716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 19:28:08.866222    2716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 19:28:08.917477    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:09.144520    2716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:28:11.709306    2716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5647653s)
	I0415 19:28:11.723184    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 19:28:11.761181    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:28:11.802747    2716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 19:28:12.016577    2716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 19:28:12.230646    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:12.451428    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 19:28:12.498470    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:28:12.539510    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:12.767354    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 19:28:12.899469    2716 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 19:28:12.915466    2716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 19:28:12.926277    2716 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0415 19:28:12.926277    2716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0415 19:28:12.926277    2716 command_runner.go:130] > Device: 0,22	Inode: 871         Links: 1
	I0415 19:28:12.926277    2716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0415 19:28:12.926277    2716 command_runner.go:130] > Access: 2024-04-15 19:28:12.801135602 +0000
	I0415 19:28:12.926277    2716 command_runner.go:130] > Modify: 2024-04-15 19:28:12.801135602 +0000
	I0415 19:28:12.926277    2716 command_runner.go:130] > Change: 2024-04-15 19:28:12.804135626 +0000
	I0415 19:28:12.926277    2716 command_runner.go:130] >  Birth: -
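start.go:541 above declares a 60s wait for /var/run/cri-dockerd.sock, then confirms the socket with `stat`. The underlying pattern is a poll-until-deadline loop; a hedged Python sketch (names and poll interval are illustrative):

```python
import os
import time

def wait_for_path(path: str, timeout: float = 60.0, poll: float = 0.5) -> bool:
    """Poll until `path` exists or the deadline passes; True once it appears."""
    deadline = time.monotonic() + timeout
    while True:
        if os.path.exists(path):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
```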
	I0415 19:28:12.926277    2716 start.go:562] Will wait 60s for crictl version
	I0415 19:28:12.941258    2716 ssh_runner.go:195] Run: which crictl
	I0415 19:28:12.948276    2716 command_runner.go:130] > /usr/bin/crictl
	I0415 19:28:12.965585    2716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 19:28:13.025774    2716 command_runner.go:130] > Version:  0.1.0
	I0415 19:28:13.025867    2716 command_runner.go:130] > RuntimeName:  docker
	I0415 19:28:13.025867    2716 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0415 19:28:13.025867    2716 command_runner.go:130] > RuntimeApiVersion:  v1
	I0415 19:28:13.025999    2716 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 19:28:13.037127    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:28:13.077162    2716 command_runner.go:130] > 26.0.0
	I0415 19:28:13.087163    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:28:13.119653    2716 command_runner.go:130] > 26.0.0
	I0415 19:28:13.126089    2716 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 19:28:13.130037    2716 out.go:177]   - env NO_PROXY=172.19.62.237
	I0415 19:28:13.132042    2716 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 19:28:13.139073    2716 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 19:28:13.139073    2716 ip.go:210] interface addr: 172.19.48.1/20
	I0415 19:28:13.154034    2716 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 19:28:13.161944    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
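The /etc/hosts update above uses a grep-then-append idiom: drop any existing line for the name, echo the fresh `ip<TAB>name` mapping, and `sudo cp` the temp file back. Sketched in Python (a hypothetical helper, not minikube's implementation):

```python
def upsert_host(hosts: str, ip: str, name: str) -> str:
    """Mirror `{ grep -v $'\\t<name>$' /etc/hosts; echo "<ip>\\t<name>"; }` (sketch)."""
    kept = [line for line in hosts.splitlines() if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

print(upsert_host("127.0.0.1\tlocalhost\n", "172.19.48.1", "host.minikube.internal"))
```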
	I0415 19:28:13.185520    2716 mustload.go:65] Loading cluster: multinode-841000
	I0415 19:28:13.186047    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:28:13.186241    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:28:15.494144    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:15.494144    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:15.494144    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:28:15.494936    2716 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000 for IP: 172.19.55.167
	I0415 19:28:15.494936    2716 certs.go:194] generating shared ca certs ...
	I0415 19:28:15.494936    2716 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:28:15.495704    2716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 19:28:15.495704    2716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 19:28:15.496331    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 19:28:15.496374    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 19:28:15.496374    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 19:28:15.496899    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 19:28:15.497123    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 19:28:15.497834    2716 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 19:28:15.497834    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 19:28:15.497834    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 19:28:15.498355    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 19:28:15.498560    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 19:28:15.499080    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 19:28:15.499173    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 19:28:15.499173    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.499173    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:15.499901    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 19:28:15.552725    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 19:28:15.606153    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 19:28:15.659105    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 19:28:15.714653    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 19:28:15.764226    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 19:28:15.816405    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 19:28:15.882737    2716 ssh_runner.go:195] Run: openssl version
	I0415 19:28:15.892015    2716 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0415 19:28:15.906922    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 19:28:15.947524    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.955287    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.955287    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.972221    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.981810    2716 command_runner.go:130] > 3ec20f2e
	I0415 19:28:15.997140    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 19:28:16.033108    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 19:28:16.069127    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.078479    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.079106    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.094998    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.106524    2716 command_runner.go:130] > b5213941
	I0415 19:28:16.120645    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 19:28:16.156773    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 19:28:16.195649    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.204033    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.204159    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.218358    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.227375    2716 command_runner.go:130] > 51391683
	I0415 19:28:16.245562    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
	I0415 19:28:16.284626    2716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 19:28:16.291591    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:28:16.292009    2716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:28:16.292580    2716 kubeadm.go:928] updating node {m02 172.19.55.167 8443 v1.29.3 docker false true} ...
	I0415 19:28:16.292694    2716 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-841000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.55.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 19:28:16.305779    2716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 19:28:16.325845    2716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	I0415 19:28:16.326145    2716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0415 19:28:16.338518    2716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0415 19:28:16.360611    2716 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0415 19:28:16.360611    2716 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0415 19:28:16.360611    2716 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0415 19:28:16.360611    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 19:28:16.360611    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
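binary.go above downloads each kubeadm/kubelet/kubectl binary with a `?checksum=file:...sha256` query, so the fetcher verifies the download against the published digest file. The verification step amounts to the following (a sketch; `verify_sha256` is an illustrative name, and .sha256 files conventionally put the hex digest in the first field):

```python
import hashlib

def verify_sha256(data: bytes, published: str) -> bool:
    """Compare a downloaded blob against its published .sha256 digest file."""
    expected = published.strip().split()[0]  # digest is the first whitespace-separated field
    return hashlib.sha256(data).hexdigest() == expected
```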
	I0415 19:28:16.378597    2716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 19:28:16.379612    2716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 19:28:16.379612    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:28:16.385615    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 19:28:16.386906    2716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 19:28:16.387193    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0415 19:28:16.388167    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 19:28:16.388462    2716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 19:28:16.389410    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0415 19:28:16.448092    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 19:28:16.462829    2716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 19:28:16.580765    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 19:28:16.588812    2716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 19:28:16.589035    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
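Each transfer above is gated by a `stat -c` existence check over SSH; only when that exits non-zero does minikube scp the cached binary into /var/lib/minikube/binaries. The check-then-copy flow, sketched with local filesystem calls (illustrative names; the real code runs the check and copy over SSH):

```python
import os
import shutil

def ensure_file(src: str, dest: str) -> bool:
    """Copy src to dest only when dest is missing; True when a transfer happened."""
    if os.path.exists(dest):           # the `stat` existence check succeeded
        return False
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(src, dest)            # stands in for the scp transfer
    return True
```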
	I0415 19:28:17.818727    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0415 19:28:17.839758    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0415 19:28:17.876852    2716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 19:28:17.928267    2716 ssh_runner.go:195] Run: grep 172.19.62.237	control-plane.minikube.internal$ /etc/hosts
	I0415 19:28:17.935629    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.62.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:28:17.984995    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:18.210647    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:28:18.245059    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:28:18.245372    2716 start.go:316] joinCluster: &{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:28:18.245960    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0415 19:28:18.246134    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:28:20.605265    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:20.605265    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:20.606327    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:23.415032    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:28:23.415032    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:23.415826    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:28:23.636097    2716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 84gkie.kgltgtunor74f8b0 --discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 19:28:23.636097    2716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3900936s)
	I0415 19:28:23.636097    2716 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0415 19:28:23.636097    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 84gkie.kgltgtunor74f8b0 --discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-841000-m02"
	I0415 19:28:23.886868    2716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 19:28:25.749478    2716 command_runner.go:130] > [preflight] Running pre-flight checks
	I0415 19:28:25.750148    2716 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0415 19:28:25.750148    2716 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0415 19:28:25.750319    2716 command_runner.go:130] > This node has joined the cluster:
	I0415 19:28:25.750319    2716 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0415 19:28:25.750319    2716 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0415 19:28:25.750319    2716 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0415 19:28:25.750380    2716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 84gkie.kgltgtunor74f8b0 --discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-841000-m02": (2.1142655s)
	I0415 19:28:25.750455    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0415 19:28:26.017314    2716 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0415 19:28:26.263462    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-841000-m02 minikube.k8s.io/updated_at=2024_04_15T19_28_26_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=multinode-841000 minikube.k8s.io/primary=false
	I0415 19:28:26.401336    2716 command_runner.go:130] > node/multinode-841000-m02 labeled
	I0415 19:28:26.401476    2716 start.go:318] duration metric: took 8.1560374s to joinCluster
	I0415 19:28:26.401476    2716 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0415 19:28:26.402031    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:28:26.404478    2716 out.go:177] * Verifying Kubernetes components...
	I0415 19:28:26.422932    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:26.672115    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:28:26.700599    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:28:26.701122    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:28:26.702127    2716 node_ready.go:35] waiting up to 6m0s for node "multinode-841000-m02" to be "Ready" ...
	I0415 19:28:26.702127    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:26.702127    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:26.702127    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:26.702127    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:26.716133    2716 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0415 19:28:26.716254    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Audit-Id: de199570-7367-4ac9-9137-154f849d564e
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:26.716254    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:26.716254    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Content-Length: 3927
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:26 GMT
	I0415 19:28:26.716254    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"635","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2903 chars]
	I0415 19:28:27.210993    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:27.211084    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:27.211084    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:27.211084    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:27.214446    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:27.214446    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:27.214446    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:27 GMT
	I0415 19:28:27.215078    2716 round_trippers.go:580]     Audit-Id: e1ced4c7-3bfd-4e2a-b6d3-9cba34ebc436
	I0415 19:28:27.215078    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:27.215078    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:27.215078    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:27.215078    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:27.215137    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:27.215188    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:27.710401    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:27.710605    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:27.710605    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:27.710670    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:27.716863    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:28:27.716863    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:27.716943    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:27.716943    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:27 GMT
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Audit-Id: c3f0e237-b9e1-4e1b-a66f-0c8075c37bab
	I0415 19:28:27.717154    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:28.208081    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:28.208159    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:28.208159    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:28.208159    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:28.215731    2716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 19:28:28.215843    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:28.215843    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:28.215902    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:28.215902    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:28.215902    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:28.215953    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:28 GMT
	I0415 19:28:28.215953    2716 round_trippers.go:580]     Audit-Id: 2a6ebd9c-e7ac-4996-99b2-d60d337f9561
	I0415 19:28:28.215953    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:28.216164    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:28.709525    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:28.709525    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:28.709525    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:28.709525    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:28.713187    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:28.713187    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:28.713187    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:28.713187    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:28 GMT
	I0415 19:28:28.713896    2716 round_trippers.go:580]     Audit-Id: 70ed5e05-c2de-459f-8b27-d22241dcdbcd
	I0415 19:28:28.713896    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:28.713896    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:28.713896    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:28.713896    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:28.714088    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:28.714204    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:29.209783    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:29.209783    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:29.209783    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:29.209783    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:29.214392    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:29.214392    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:29.214392    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:29 GMT
	I0415 19:28:29.214392    2716 round_trippers.go:580]     Audit-Id: 03d636ca-0936-469f-8f91-3f96b54df795
	I0415 19:28:29.214567    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:29.214567    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:29.214567    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:29.214567    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:29.214611    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:29.214669    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:29.708063    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:29.708119    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:29.708119    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:29.708119    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:29.711712    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:29.711712    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:29.712325    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:29.712325    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:29.712325    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:29.712325    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:29.712406    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:29.712447    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:29 GMT
	I0415 19:28:29.712447    2716 round_trippers.go:580]     Audit-Id: 1cf3d0e8-46a4-412c-b09d-f8a86f5f0afa
	I0415 19:28:29.712666    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:30.217193    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:30.217193    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:30.217193    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:30.217193    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:30.220927    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:30.220927    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:30.220927    2716 round_trippers.go:580]     Audit-Id: 8fcc0251-e4d1-4444-90aa-c9d488dfc088
	I0415 19:28:30.220927    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:30.220927    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:30.221806    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:30.221806    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:30.221806    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:30.221852    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:30 GMT
	I0415 19:28:30.221869    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:30.702637    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:30.702905    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:30.702905    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:30.702905    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:30.708237    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:30.709254    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:30.709286    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:30 GMT
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Audit-Id: b77d69c5-3750-4309-b763-3af292fe3c18
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:30.709286    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:30.709402    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:31.216890    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:31.216970    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:31.216970    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:31.216970    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:31.221303    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:31.221374    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:31.221483    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:31.221571    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:31 GMT
	I0415 19:28:31.221853    2716 round_trippers.go:580]     Audit-Id: bf7a4364-6929-4a37-97cd-a9c3cb5b34a4
	I0415 19:28:31.221853    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:31.221853    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:31.221853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:31.221853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:31.221853    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:31.222536    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:31.705205    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:31.705205    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:31.705205    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:31.705205    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:31.709296    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:31.709296    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Audit-Id: 76d2a746-caa4-4395-b964-078f42cf77d7
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:31.709689    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:31.709689    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:31 GMT
	I0415 19:28:31.709809    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:32.212183    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:32.212183    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:32.212183    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:32.212183    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:32.221079    2716 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 19:28:32.221079    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:32 GMT
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Audit-Id: c0e4cd6f-4bab-4238-9cfa-49e193d7b46a
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:32.221079    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:32.221079    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:32.221079    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:32.702680    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:32.702720    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:32.702790    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:32.702790    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:32.707667    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:32.707744    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:32.707744    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:32.707744    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:32 GMT
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Audit-Id: 2edac6f5-1847-42ad-81a4-cc4502513e72
	I0415 19:28:32.708032    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:33.206652    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:33.206652    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:33.206760    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:33.206760    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:33.211025    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:33.211302    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:33 GMT
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Audit-Id: 8f547bd6-4f43-4170-beac-1c6a6ecf3a5f
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:33.211302    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:33.211426    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:33.211426    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:33.211615    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:33.710416    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:33.710416    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:33.710666    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:33.710666    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:33.714046    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:33.714789    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:33 GMT
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Audit-Id: 3f835fce-512f-4cf4-bce6-4518ac5e9ccc
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:33.714789    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:33.714789    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:33.714905    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:33.715363    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:34.217497    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:34.217497    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:34.217497    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:34.217497    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:34.221974    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:34.221974    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:34.221974    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:34.221974    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:34.221974    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:34.221974    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:34.222995    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:34 GMT
	I0415 19:28:34.222995    2716 round_trippers.go:580]     Audit-Id: 0e8d9fad-d333-49e5-978d-1085326d5235
	I0415 19:28:34.222995    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:34.223187    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:34.705852    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:34.705852    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:34.705852    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:34.705852    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:34.710471    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:34.710471    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:34 GMT
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Audit-Id: e118ae73-1611-45e2-a266-fc0b966092ec
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:34.710471    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:34.710471    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:34.710471    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:35.211738    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:35.211840    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:35.211840    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:35.211840    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:35.499070    2716 round_trippers.go:574] Response Status: 200 OK in 287 milliseconds
	I0415 19:28:35.499070    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:35.499561    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:35.499561    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:35 GMT
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Audit-Id: 0c882350-f476-4567-86a6-8f3fd8ed0867
	I0415 19:28:35.499799    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:35.711186    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:35.711186    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:35.711186    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:35.711186    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:35.847232    2716 round_trippers.go:574] Response Status: 200 OK in 136 milliseconds
	I0415 19:28:35.847922    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:35.847922    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:35.847922    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:35 GMT
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Audit-Id: e1c35e4c-ff93-4e97-b3c5-0ddbb2d3fe90
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:35.847922    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:35.848678    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:36.213903    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:36.213903    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:36.213903    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:36.213903    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:36.219073    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:36.219073    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:36.219073    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:36 GMT
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Audit-Id: 05a725cb-4b91-4098-9013-a7838dbbbd38
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:36.219073    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:36.219666    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:36.702945    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:36.702945    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:36.702945    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:36.702945    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:36.706983    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:36.707736    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:36.707736    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:36.707736    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:36 GMT
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Audit-Id: d745f174-5628-48e3-9bfb-7361dcddc7a3
	I0415 19:28:36.707736    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:37.211118    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:37.211118    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:37.211118    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:37.211118    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:37.215882    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:37.215882    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:37 GMT
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Audit-Id: 2b28b06b-0370-4a3d-a20b-818fdac09947
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:37.215882    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:37.215882    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:37.215882    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:37.716959    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:37.716959    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:37.716959    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:37.716959    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:37.720550    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:37.721411    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:37.721411    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:37.721411    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:37.721503    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:37 GMT
	I0415 19:28:37.721503    2716 round_trippers.go:580]     Audit-Id: dc1ddb2c-0dd0-4b29-a8da-cace0712f9dd
	I0415 19:28:37.721503    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:37.721545    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:37.721797    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:38.206715    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:38.206900    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:38.206900    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:38.206900    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:38.214576    2716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 19:28:38.215467    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:38.215467    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:38 GMT
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Audit-Id: 6eac6008-9656-4627-8cf1-fa0c7ec88672
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:38.215467    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:38.215467    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:38.216340    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:38.709462    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:38.709462    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:38.709462    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:38.709462    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:38.713979    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:38.713979    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Audit-Id: 31e6718e-584e-4674-ba73-77084e9af962
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:38.713979    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:38.713979    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:38 GMT
	I0415 19:28:38.714513    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:39.218580    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:39.218643    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:39.218713    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:39.218713    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:39.223558    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:39.223558    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:39.223558    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:39 GMT
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Audit-Id: 623e8c9c-09d6-4f44-8aae-dfdba2378099
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:39.223558    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:39.223558    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:39.707387    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:39.707652    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:39.707652    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:39.707652    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:39.711434    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:39.711434    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:39.711434    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:39 GMT
	I0415 19:28:39.711434    2716 round_trippers.go:580]     Audit-Id: 4b06cd31-8b77-4eef-bc8f-b5f729b6e1d5
	I0415 19:28:39.712377    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:39.712377    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:39.712377    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:39.712429    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:39.712606    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:40.216889    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:40.216889    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:40.216889    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:40.216889    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:40.227554    2716 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0415 19:28:40.227978    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:40.227978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:40.227978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:40 GMT
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Audit-Id: 956aa654-e4d7-4677-a08e-5f32468c768c
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:40.228499    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:40.228499    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:40.708640    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:40.708939    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:40.708939    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:40.708939    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:40.714534    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:40.714534    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:40.714534    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:40.714534    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:40 GMT
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Audit-Id: 8a2087c9-c25d-49d9-8c14-a0450309cb48
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:40.719950    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:41.209398    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:41.209398    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:41.209398    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:41.209398    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:41.213014    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:41.213458    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Audit-Id: 08346594-9041-4260-adbb-6946a834593a
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:41.213458    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:41.213458    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:41 GMT
	I0415 19:28:41.213458    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:41.711705    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:41.711791    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:41.711791    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:41.711791    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:41.715266    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:41.715266    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:41.716039    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:41.716039    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:41 GMT
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Audit-Id: 8c06bb01-93fa-4ea7-a1c1-1ee4439b257d
	I0415 19:28:41.716324    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:42.216722    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:42.216722    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:42.216722    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:42.216722    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:42.233579    2716 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0415 19:28:42.233579    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:42.233579    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:42.233579    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:42 GMT
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Audit-Id: d553af8d-dd6d-42aa-b933-28768c26a6af
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:42.233579    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:42.233579    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:42.717567    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:42.717567    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:42.717673    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:42.717673    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:42.721012    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:42.721012    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:42.721012    2716 round_trippers.go:580]     Audit-Id: 2b84ad32-3f2f-440d-9bf7-d49c4428fbcc
	I0415 19:28:42.721012    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:42.721012    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:42.721012    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:42.721012    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:42.721780    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:42 GMT
	I0415 19:28:42.722179    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:43.218108    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:43.218192    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.218192    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.218192    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.222562    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:43.222562    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Audit-Id: 209d17f8-9c3b-4339-aa4d-4f96a6324ed8
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.222562    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.222562    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.223721    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"666","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3270 chars]
	I0415 19:28:43.224242    2716 node_ready.go:49] node "multinode-841000-m02" has status "Ready":"True"
	I0415 19:28:43.224318    2716 node_ready.go:38] duration metric: took 16.5220572s for node "multinode-841000-m02" to be "Ready" ...
	I0415 19:28:43.224378    2716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:28:43.224438    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:28:43.224438    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.224438    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.224526    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.229646    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:43.229646    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.230651    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.230685    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Audit-Id: 632df9a5-6871-45d7-ba11-3b8ee28cdfec
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.232496    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"669"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70426 chars]
	I0415 19:28:43.236752    2716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.236752    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:28:43.236752    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.236752    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.236752    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.240684    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.240684    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.241149    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.241149    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Audit-Id: 14f4fb54-2353-430a-8f2a-38d2a580896b
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.241403    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0415 19:28:43.241526    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.241526    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.241526    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.241526    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.247063    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:43.247063    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Audit-Id: e1a1b9b6-fd63-442d-aed4-2a8a1b6bcb9d
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.247125    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.247125    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.247474    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.248193    2716 pod_ready.go:92] pod "coredns-76f75df574-vqqtx" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.248193    2716 pod_ready.go:81] duration metric: took 11.4415ms for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.248193    2716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.248390    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-841000
	I0415 19:28:43.248466    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.248466    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.248466    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.265925    2716 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0415 19:28:43.265925    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.265925    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.265925    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.266877    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.266877    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.266877    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.266877    2716 round_trippers.go:580]     Audit-Id: 7bab3292-3d0b-421f-926b-de45869519d3
	I0415 19:28:43.267035    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-841000","namespace":"kube-system","uid":"ec0b243b-fd9f-4081-82dc-532086096935","resourceVersion":"420","creationTimestamp":"2024-04-15T19:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.237:2379","kubernetes.io/config.hash":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.mirror":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.seen":"2024-04-15T19:24:49.499002669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0415 19:28:43.267475    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.267581    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.267581    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.267581    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.271338    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.271338    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.271338    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Audit-Id: dfedf64e-cfee-4548-9692-b7a564c28054
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.271338    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.271830    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.271830    2716 pod_ready.go:92] pod "etcd-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.271830    2716 pod_ready.go:81] duration metric: took 23.6365ms for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.271830    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.272389    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-841000
	I0415 19:28:43.272389    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.272559    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.272559    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.275770    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.275932    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Audit-Id: 461212d4-49b0-41c7-aab6-486c3fe219dd
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.276007    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.276007    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.276066    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-841000","namespace":"kube-system","uid":"092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b","resourceVersion":"419","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.237:8443","kubernetes.io/config.hash":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.mirror":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.seen":"2024-04-15T19:24:59.013465769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0415 19:28:43.276959    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.276959    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.276959    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.276959    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.280853    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.280853    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Audit-Id: ec708104-8fd3-41a8-95eb-a6c66790b9c8
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.280853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.280853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.280853    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.281857    2716 pod_ready.go:92] pod "kube-apiserver-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.281857    2716 pod_ready.go:81] duration metric: took 10.0268ms for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.281857    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.281857    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-841000
	I0415 19:28:43.281857    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.281857    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.281857    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.285141    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.285141    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.285141    2716 round_trippers.go:580]     Audit-Id: 5d51c5b8-8724-499e-b1f2-f63ccbe19b15
	I0415 19:28:43.285141    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.285141    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.285141    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.286119    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.286119    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.286119    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-841000","namespace":"kube-system","uid":"8922765c-684e-491a-83a0-e06cec665bbd","resourceVersion":"417","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.mirror":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.seen":"2024-04-15T19:24:59.013467070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0415 19:28:43.286837    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.286837    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.286837    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.286837    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.289407    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:28:43.289407    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.289407    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.289407    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.289407    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.289407    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.290446    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.290446    2716 round_trippers.go:580]     Audit-Id: 2c1b48e9-a61b-4f99-a456-7e5f3b9f5c34
	I0415 19:28:43.290584    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.290869    2716 pod_ready.go:92] pod "kube-controller-manager-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.290989    2716 pod_ready.go:81] duration metric: took 9.1322ms for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.290989    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.420912    2716 request.go:629] Waited for 129.4443ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7v79z
	I0415 19:28:43.421046    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7v79z
	I0415 19:28:43.421046    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.421046    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.421046    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.425434    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:43.425434    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Audit-Id: cfb9c58b-b8f5-4d5f-809e-cf190b11fef0
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.425434    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.425434    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.426273    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7v79z","generateName":"kube-proxy-","namespace":"kube-system","uid":"0a08abf8-9fa3-4fab-86cc-1b709bc0d263","resourceVersion":"414","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c07d15d0-ec90-403c-8aa0-1c81c17e9eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07d15d0-ec90-403c-8aa0-1c81c17e9eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0415 19:28:43.625724    2716 request.go:629] Waited for 198.2211ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.625805    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.625805    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.625892    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.625892    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.629742    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.629742    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Audit-Id: ed530d90-14b1-49e2-9b3a-3486a68617cf
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.630767    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.630767    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.631016    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.631471    2716 pod_ready.go:92] pod "kube-proxy-7v79z" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.631581    2716 pod_ready.go:81] duration metric: took 340.4714ms for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.631581    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbmcg" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.830671    2716 request.go:629] Waited for 198.7625ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbmcg
	I0415 19:28:43.830927    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbmcg
	I0415 19:28:43.830968    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.830968    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.830968    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.835626    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:43.835626    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.835626    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.835626    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.835626    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.835626    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.835996    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.835996    2716 round_trippers.go:580]     Audit-Id: 4dbeb3e6-25e6-4b1d-b7ab-1030b696086d
	I0415 19:28:43.836105    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mbmcg","generateName":"kube-proxy-","namespace":"kube-system","uid":"893d185a-0a7b-4fbf-b2d9-824070c9ddd8","resourceVersion":"654","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c07d15d0-ec90-403c-8aa0-1c81c17e9eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07d15d0-ec90-403c-8aa0-1c81c17e9eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0415 19:28:44.020840    2716 request.go:629] Waited for 184.0027ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:44.021037    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:44.021118    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.021160    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.021181    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.025831    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:44.025831    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.025831    2716 round_trippers.go:580]     Audit-Id: 36bdbede-c929-4d81-b85f-afc4195d0e85
	I0415 19:28:44.026083    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.026083    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.026083    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.026083    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.026083    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.026449    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"666","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3270 chars]
	I0415 19:28:44.026937    2716 pod_ready.go:92] pod "kube-proxy-mbmcg" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:44.027018    2716 pod_ready.go:81] duration metric: took 395.4337ms for pod "kube-proxy-mbmcg" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:44.027018    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:44.224930    2716 request.go:629] Waited for 197.8194ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:28:44.225386    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:28:44.225437    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.225437    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.225437    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.229597    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:44.229597    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.229597    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.229597    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Audit-Id: fbe26691-9718-44fc-9f96-1c2c3f5dca72
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.230507    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-841000","namespace":"kube-system","uid":"67374ab1-2ea0-4b43-82b8-1b666d274f2f","resourceVersion":"418","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.mirror":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.seen":"2024-04-15T19:24:59.013468170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0415 19:28:44.428468    2716 request.go:629] Waited for 197.7739ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:44.428599    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:44.428599    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.428599    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.428657    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.434813    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:28:44.434813    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Audit-Id: ec1e40c8-cdf3-48d7-be78-0e49766d2cd7
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.434813    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.434813    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.434813    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:44.435497    2716 pod_ready.go:92] pod "kube-scheduler-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:44.435497    2716 pod_ready.go:81] duration metric: took 408.4753ms for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:44.435497    2716 pod_ready.go:38] duration metric: took 1.211109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:28:44.435497    2716 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 19:28:44.450350    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:28:44.476836    2716 system_svc.go:56] duration metric: took 41.3388ms WaitForService to wait for kubelet
	I0415 19:28:44.476836    2716 kubeadm.go:576] duration metric: took 18.0752139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:28:44.477785    2716 node_conditions.go:102] verifying NodePressure condition ...
	I0415 19:28:44.632396    2716 request.go:629] Waited for 154.2957ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes
	I0415 19:28:44.632483    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes
	I0415 19:28:44.632483    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.632483    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.632483    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.638407    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:44.638407    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.638555    2716 round_trippers.go:580]     Audit-Id: dd4cc3b7-0be6-4412-8032-f98466538598
	I0415 19:28:44.638578    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.638603    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.638603    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.638603    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.638648    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.638697    2716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"670"},"items":[{"metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 9281 chars]
	I0415 19:28:44.639417    2716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 19:28:44.639417    2716 node_conditions.go:123] node cpu capacity is 2
	I0415 19:28:44.639417    2716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 19:28:44.639417    2716 node_conditions.go:123] node cpu capacity is 2
	I0415 19:28:44.639417    2716 node_conditions.go:105] duration metric: took 161.6306ms to run NodePressure ...
	I0415 19:28:44.639417    2716 start.go:240] waiting for startup goroutines ...
	I0415 19:28:44.639979    2716 start.go:254] writing updated cluster config ...
	I0415 19:28:44.655358    2716 ssh_runner.go:195] Run: rm -f paused
	I0415 19:28:44.820600    2716 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 19:28:44.830125    2716 out.go:177] * Done! kubectl is now configured to use "multinode-841000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.384812714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.417423737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.417984746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.418092747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.418927361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 cri-dockerd[1235]: time="2024-04-15T19:25:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8e500689099dfc8c7a465e9a83f5b07ab9fd6d4fb1c8392c2840d28da99859ae/resolv.conf as [nameserver 172.19.48.1]"
	Apr 15 19:25:27 multinode-841000 cri-dockerd[1235]: time="2024-04-15T19:25:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eaba3da43a79559e3185ee94e6f74fdc85cf370297d7600f5b64111b2f002b5e/resolv.conf as [nameserver 172.19.48.1]"
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843113167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843201869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843222470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843432375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845331519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845486623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845504223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845878532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.971951741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.972243544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.972268944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.973312853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:12 multinode-841000 cri-dockerd[1235]: time="2024-04-15T19:29:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3830cdbfba8a40c644fcba4f515494e825b7b2f795c752165479000bcabc8533/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 15 19:29:13 multinode-841000 cri-dockerd[1235]: time="2024-04-15T19:29:13Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538138188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538328490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538353390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538496891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	89943bb7b3d8d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   51 seconds ago      Running             busybox                   0                   3830cdbfba8a4       busybox-7fdf7869d9-gkn8h
	023c483d6cc6b       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   8e500689099df       coredns-76f75df574-vqqtx
	13b8950243469       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   eaba3da43a795       storage-provisioner
	6ed282cec4581       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   0eee7b8b55814       kindnet-zrzd6
	cc8a027d4211d       a1d263b5dc5b0                                                                                         4 minutes ago       Running             kube-proxy                0                   433adb937eeae       kube-proxy-7v79z
	8d334a05315f6       8c390d98f50c0                                                                                         5 minutes ago       Running             kube-scheduler            0                   a4fe4cd1aa4c5       kube-scheduler-multinode-841000
	af7b5d2bf03e6       6052a25da3f97                                                                                         5 minutes ago       Running             kube-controller-manager   0                   58667570745a9       kube-controller-manager-multinode-841000
	6867880d79723       39f995c9f1996                                                                                         5 minutes ago       Running             kube-apiserver            0                   b367a28f9f2e7       kube-apiserver-multinode-841000
	230daf2c59cd5       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   ff71106bb6df0       etcd-multinode-841000
	
	
	==> coredns [023c483d6cc6] <==
	[INFO] 10.244.0.3:49964 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000208702s
	[INFO] 10.244.1.2:60232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000278402s
	[INFO] 10.244.1.2:51509 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000287102s
	[INFO] 10.244.1.2:52348 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141101s
	[INFO] 10.244.1.2:34223 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185102s
	[INFO] 10.244.1.2:50171 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000172102s
	[INFO] 10.244.1.2:47185 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249302s
	[INFO] 10.244.1.2:44434 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000222602s
	[INFO] 10.244.1.2:32889 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157502s
	[INFO] 10.244.0.3:39242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220802s
	[INFO] 10.244.0.3:56718 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000238002s
	[INFO] 10.244.0.3:33231 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000967s
	[INFO] 10.244.0.3:52683 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060101s
	[INFO] 10.244.1.2:59976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134601s
	[INFO] 10.244.1.2:45235 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000302602s
	[INFO] 10.244.1.2:35171 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000260702s
	[INFO] 10.244.1.2:48805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000762s
	[INFO] 10.244.0.3:60616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246402s
	[INFO] 10.244.0.3:36380 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114301s
	[INFO] 10.244.0.3:47182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090801s
	[INFO] 10.244.0.3:55760 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000092601s
	[INFO] 10.244.1.2:35347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146601s
	[INFO] 10.244.1.2:56464 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288102s
	[INFO] 10.244.1.2:54660 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000709s
	[INFO] 10.244.1.2:43202 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000058601s
	
	
	==> describe nodes <==
	Name:               multinode-841000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=multinode-841000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T19_24_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 19:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 19:29:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 19:29:34 +0000   Mon, 15 Apr 2024 19:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 19:29:34 +0000   Mon, 15 Apr 2024 19:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 19:29:34 +0000   Mon, 15 Apr 2024 19:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 19:29:34 +0000   Mon, 15 Apr 2024 19:25:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.62.237
	  Hostname:    multinode-841000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7c14b12674c41e0878785eed7d197fc
	  System UUID:                4a57c417-cda2-a24a-90d7-fc6ccd0391d4
	  Boot ID:                    0f92915c-52b2-4e4c-acc7-87e8e0ff34dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gkn8h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-76f75df574-vqqtx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m52s
	  kube-system                 etcd-multinode-841000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m7s
	  kube-system                 kindnet-zrzd6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m52s
	  kube-system                 kube-apiserver-multinode-841000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-multinode-841000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-7v79z                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-scheduler-multinode-841000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m15s (x8 over 5m15s)  kubelet          Node multinode-841000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s (x8 over 5m15s)  kubelet          Node multinode-841000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s (x7 over 5m15s)  kubelet          Node multinode-841000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m5s                   kubelet          Node multinode-841000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s                   kubelet          Node multinode-841000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s                   kubelet          Node multinode-841000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m52s                  node-controller  Node multinode-841000 event: Registered Node multinode-841000 in Controller
	  Normal  NodeReady                4m38s                  kubelet          Node multinode-841000 status is now: NodeReady
	
	
	Name:               multinode-841000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=multinode-841000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T19_28_26_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 19:28:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 19:29:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 19:29:26 +0000   Mon, 15 Apr 2024 19:28:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 19:29:26 +0000   Mon, 15 Apr 2024 19:28:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 19:29:26 +0000   Mon, 15 Apr 2024 19:28:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 19:29:26 +0000   Mon, 15 Apr 2024 19:28:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.55.167
	  Hostname:    multinode-841000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 371c8e1d12f1450088f192415d94b9af
	  System UUID:                740c74a4-1425-a745-bde4-543f010981ea
	  Boot ID:                    263a0e94-df3a-46b2-99db-47f12924e038
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-hfpk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-2cgqg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      100s
	  kube-system                 kube-proxy-mbmcg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  100s (x2 over 100s)  kubelet          Node multinode-841000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x2 over 100s)  kubelet          Node multinode-841000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x2 over 100s)  kubelet          Node multinode-841000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                  node-controller  Node multinode-841000-m02 event: Registered Node multinode-841000-m02 in Controller
	  Normal  NodeReady                82s                  kubelet          Node multinode-841000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 19:23] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.201130] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Apr15 19:24] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.134096] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.670575] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.220137] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.266682] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.959731] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.248545] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.216396] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.321934] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.112117] kauditd_printk_skb: 183 callbacks suppressed
	[ +11.861909] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.125408] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.369882] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +6.857961] systemd-fstab-generator[1719]: Ignoring "noauto" option for root device
	[  +0.117218] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.878328] systemd-fstab-generator[2130]: Ignoring "noauto" option for root device
	[  +0.155563] kauditd_printk_skb: 62 callbacks suppressed
	[Apr15 19:25] systemd-fstab-generator[2318]: Ignoring "noauto" option for root device
	[  +0.165691] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.054875] kauditd_printk_skb: 51 callbacks suppressed
	[Apr15 19:29] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [230daf2c59cd] <==
	{"level":"info","ts":"2024-04-15T19:24:51.188939Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T19:24:51.188118Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"15816a25df1d9b0c","local-member-attributes":"{Name:multinode-841000 ClientURLs:[https://172.19.62.237:2379]}","request-path":"/0/members/15816a25df1d9b0c/attributes","cluster-id":"2db538078503edda","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T19:24:51.205548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.62.237:2379"}
	{"level":"info","ts":"2024-04-15T19:24:51.212275Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T19:24:51.22Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T19:24:51.243052Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T19:24:51.244185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T19:24:51.252667Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db538078503edda","local-member-id":"15816a25df1d9b0c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T19:24:51.258991Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T19:24:51.259329Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T19:25:19.509098Z","caller":"traceutil/trace.go:171","msg":"trace[1138436874] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"361.380408ms","start":"2024-04-15T19:25:19.147703Z","end":"2024-04-15T19:25:19.509084Z","steps":["trace[1138436874] 'process raft request'  (duration: 360.382281ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T19:25:19.508233Z","caller":"traceutil/trace.go:171","msg":"trace[1629984653] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:432; }","duration":"333.14765ms","start":"2024-04-15T19:25:19.175061Z","end":"2024-04-15T19:25:19.508209Z","steps":["trace[1629984653] 'read index received'  (duration: 332.876242ms)","trace[1629984653] 'applied index is now lower than readState.Index'  (duration: 270.808µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T19:25:19.512401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.307761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841000\" ","response":"range_response_count:1 size:4493"}
	{"level":"info","ts":"2024-04-15T19:25:19.512447Z","caller":"traceutil/trace.go:171","msg":"trace[29021971] range","detail":"{range_begin:/registry/minions/multinode-841000; range_end:; response_count:1; response_revision:417; }","duration":"337.404164ms","start":"2024-04-15T19:25:19.17503Z","end":"2024-04-15T19:25:19.512434Z","steps":["trace[29021971] 'agreement among raft nodes before linearized reading'  (duration: 337.302461ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T19:25:19.512473Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T19:25:19.175014Z","time spent":"337.451365ms","remote":"127.0.0.1:58364","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":4515,"request content":"key:\"/registry/minions/multinode-841000\" "}
	{"level":"warn","ts":"2024-04-15T19:25:19.513214Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T19:25:19.147689Z","time spent":"361.48011ms","remote":"127.0.0.1:58370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6345,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-multinode-841000\" mod_revision:344 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-841000\" value_size:6270 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-841000\" > >"}
	{"level":"info","ts":"2024-04-15T19:25:19.660109Z","caller":"traceutil/trace.go:171","msg":"trace[1753116208] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"125.984485ms","start":"2024-04-15T19:25:19.534106Z","end":"2024-04-15T19:25:19.660091Z","steps":["trace[1753116208] 'process raft request'  (duration: 122.699397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T19:28:35.501239Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.523494ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11172461882149939846 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-841000-m02\" mod_revision:638 > success:<request_put:<key:\"/registry/minions/multinode-841000-m02\" value_size:3094 >> failure:<request_range:<key:\"/registry/minions/multinode-841000-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-15T19:28:35.501712Z","caller":"traceutil/trace.go:171","msg":"trace[1484539855] linearizableReadLoop","detail":"{readStateIndex:706; appliedIndex:705; }","duration":"281.180652ms","start":"2024-04-15T19:28:35.220518Z","end":"2024-04-15T19:28:35.501699Z","steps":["trace[1484539855] 'read index received'  (duration: 126.72155ms)","trace[1484539855] 'applied index is now lower than readState.Index'  (duration: 154.458102ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T19:28:35.501814Z","caller":"traceutil/trace.go:171","msg":"trace[493452803] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"318.878595ms","start":"2024-04-15T19:28:35.182922Z","end":"2024-04-15T19:28:35.501801Z","steps":["trace[493452803] 'process raft request'  (duration: 164.495793ms)","trace[493452803] 'compare'  (duration: 153.441793ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T19:28:35.501909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.366353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841000-m02\" ","response":"range_response_count:1 size:3155"}
	{"level":"info","ts":"2024-04-15T19:28:35.502666Z","caller":"traceutil/trace.go:171","msg":"trace[138218994] range","detail":"{range_begin:/registry/minions/multinode-841000-m02; range_end:; response_count:1; response_revision:649; }","duration":"282.168962ms","start":"2024-04-15T19:28:35.220484Z","end":"2024-04-15T19:28:35.502653Z","steps":["trace[138218994] 'agreement among raft nodes before linearized reading'  (duration: 281.356754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T19:28:35.503288Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T19:28:35.182904Z","time spent":"319.4456ms","remote":"127.0.0.1:58364","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3140,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-841000-m02\" mod_revision:638 > success:<request_put:<key:\"/registry/minions/multinode-841000-m02\" value_size:3094 >> failure:<request_range:<key:\"/registry/minions/multinode-841000-m02\" > >"}
	{"level":"warn","ts":"2024-04-15T19:28:35.851779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.868897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841000-m02\" ","response":"range_response_count:1 size:3155"}
	{"level":"info","ts":"2024-04-15T19:28:35.851842Z","caller":"traceutil/trace.go:171","msg":"trace[2043337120] range","detail":"{range_begin:/registry/minions/multinode-841000-m02; range_end:; response_count:1; response_revision:649; }","duration":"131.979298ms","start":"2024-04-15T19:28:35.719848Z","end":"2024-04-15T19:28:35.851827Z","steps":["trace[2043337120] 'range keys from in-memory index tree'  (duration: 131.754496ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:30:04 up 7 min,  0 users,  load average: 0.16, 0.21, 0.11
	Linux multinode-841000 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6ed282cec458] <==
	I0415 19:29:02.108272       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:29:12.114818       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:29:12.114849       1 main.go:227] handling current node
	I0415 19:29:12.114861       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:29:12.114883       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:29:22.121231       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:29:22.121336       1 main.go:227] handling current node
	I0415 19:29:22.121363       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:29:22.121372       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:29:32.135521       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:29:32.135763       1 main.go:227] handling current node
	I0415 19:29:32.135840       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:29:32.135852       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:29:42.143574       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:29:42.143692       1 main.go:227] handling current node
	I0415 19:29:42.143707       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:29:42.143716       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:29:52.160349       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:29:52.160402       1 main.go:227] handling current node
	I0415 19:29:52.160416       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:29:52.160424       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:30:02.175881       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:30:02.175984       1 main.go:227] handling current node
	I0415 19:30:02.176022       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:30:02.176031       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6867880d7972] <==
	I0415 19:24:54.576869       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0415 19:24:54.579471       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 19:24:54.585111       1 controller.go:624] quota admission added evaluator for: namespaces
	I0415 19:24:54.585752       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 19:24:54.590011       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 19:24:54.590212       1 aggregator.go:165] initial CRD sync complete...
	I0415 19:24:54.590302       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 19:24:54.590335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 19:24:54.590354       1 cache.go:39] Caches are synced for autoregister controller
	I0415 19:24:54.648456       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 19:24:55.399123       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 19:24:55.408471       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 19:24:55.408837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 19:24:56.694357       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 19:24:56.790361       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 19:24:56.982171       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 19:24:56.995853       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.62.237]
	I0415 19:24:56.997482       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 19:24:57.010333       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 19:24:57.503685       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 19:24:58.933886       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 19:24:58.968275       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 19:24:58.996233       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 19:25:12.001163       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 19:25:12.223394       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [af7b5d2bf03e] <==
	I0415 19:25:13.347872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="80.203µs"
	I0415 19:25:26.756894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="152.902µs"
	I0415 19:25:26.793413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="69.901µs"
	I0415 19:25:27.056204       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0415 19:25:28.068564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="108.004µs"
	I0415 19:25:29.098260       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="28.002782ms"
	I0415 19:25:29.100787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="78.203µs"
	I0415 19:28:24.780829       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841000-m02\" does not exist"
	I0415 19:28:24.794199       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-841000-m02" podCIDRs=["10.244.1.0/24"]
	I0415 19:28:24.814169       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mbmcg"
	I0415 19:28:24.823163       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2cgqg"
	I0415 19:28:27.099887       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-841000-m02"
	I0415 19:28:27.100265       1 event.go:376] "Event occurred" object="multinode-841000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-841000-m02 event: Registered Node multinode-841000-m02 in Controller"
	I0415 19:28:42.950027       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-841000-m02"
	I0415 19:29:11.301858       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0415 19:29:11.329527       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-hfpk6"
	I0415 19:29:11.369131       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gkn8h"
	I0415 19:29:11.379268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="78.242453ms"
	I0415 19:29:11.422629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.798757ms"
	I0415 19:29:11.455694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.400371ms"
	I0415 19:29:11.456533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="672.706µs"
	I0415 19:29:14.226778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="19.663563ms"
	I0415 19:29:14.227467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="166.401µs"
	I0415 19:29:14.304800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.761072ms"
	I0415 19:29:14.306468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="150.201µs"
	
	
	==> kube-proxy [cc8a027d4211] <==
	I0415 19:25:14.944883       1 server_others.go:72] "Using iptables proxy"
	I0415 19:25:14.961420       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.62.237"]
	I0415 19:25:15.076544       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 19:25:15.076703       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 19:25:15.076723       1 server_others.go:168] "Using iptables Proxier"
	I0415 19:25:15.081239       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 19:25:15.082383       1 server.go:865] "Version info" version="v1.29.3"
	I0415 19:25:15.082420       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 19:25:15.083884       1 config.go:188] "Starting service config controller"
	I0415 19:25:15.083932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 19:25:15.084121       1 config.go:97] "Starting endpoint slice config controller"
	I0415 19:25:15.084201       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 19:25:15.087448       1 config.go:315] "Starting node config controller"
	I0415 19:25:15.087481       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 19:25:15.185348       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 19:25:15.185460       1 shared_informer.go:318] Caches are synced for service config
	I0415 19:25:15.188983       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8d334a05315f] <==
	W0415 19:24:55.501678       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 19:24:55.501880       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 19:24:55.675925       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 19:24:55.676265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 19:24:55.754252       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:55.754425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:55.847516       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:55.847572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:55.851092       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 19:24:55.851140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 19:24:55.861466       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:55.861820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:55.954178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 19:24:55.954371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 19:24:55.959844       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 19:24:55.960089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 19:24:56.041986       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 19:24:56.042536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 19:24:56.071137       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:56.071929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:56.110763       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 19:24:56.111230       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 19:24:56.172830       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 19:24:56.173223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0415 19:24:57.859636       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 19:25:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:25:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:26:59 multinode-841000 kubelet[2137]: E0415 19:26:59.182813    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:26:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:26:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:26:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:26:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:27:59 multinode-841000 kubelet[2137]: E0415 19:27:59.184582    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:27:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:27:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:27:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:27:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:28:59 multinode-841000 kubelet[2137]: E0415 19:28:59.183391    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:28:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:28:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:28:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:28:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:29:11 multinode-841000 kubelet[2137]: I0415 19:29:11.386361    2137 topology_manager.go:215] "Topology Admit Handler" podUID="b77c41f6-9299-4dce-8630-f4a06ef00e04" podNamespace="default" podName="busybox-7fdf7869d9-gkn8h"
	Apr 15 19:29:11 multinode-841000 kubelet[2137]: I0415 19:29:11.513812    2137 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhhl\" (UniqueName: \"kubernetes.io/projected/b77c41f6-9299-4dce-8630-f4a06ef00e04-kube-api-access-rdhhl\") pod \"busybox-7fdf7869d9-gkn8h\" (UID: \"b77c41f6-9299-4dce-8630-f4a06ef00e04\") " pod="default/busybox-7fdf7869d9-gkn8h"
	Apr 15 19:29:14 multinode-841000 kubelet[2137]: I0415 19:29:14.295303    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-gkn8h" podStartSLOduration=2.181658762 podStartE2EDuration="3.295255251s" podCreationTimestamp="2024-04-15 19:29:11 +0000 UTC" firstStartedPulling="2024-04-15 19:29:12.178768366 +0000 UTC m=+253.297322262" lastFinishedPulling="2024-04-15 19:29:13.292364755 +0000 UTC m=+254.410918751" observedRunningTime="2024-04-15 19:29:14.294529645 +0000 UTC m=+255.413083641" watchObservedRunningTime="2024-04-15 19:29:14.295255251 +0000 UTC m=+255.413809147"
	Apr 15 19:29:59 multinode-841000 kubelet[2137]: E0415 19:29:59.185032    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:29:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:29:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:29:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:29:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:29:56.028873    6044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-841000 -n multinode-841000
E0415 19:30:10.542038   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-841000 -n multinode-841000: (13.1982249s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-841000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (59.75s)

TestMultiNode/serial/StartAfterStop (298.25s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 node start m03 -v=7 --alsologtostderr
E0415 19:43:13.775629   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 19:45:10.550967   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-841000 node start m03 -v=7 --alsologtostderr: exit status 90 (3m2.5270051s)

-- stdout --
	* Starting "multinode-841000-m03" worker node in "multinode-841000" cluster
	* Restarting existing hyperv VM for "multinode-841000-m03" ...
	
	

-- /stdout --
** stderr ** 
	W0415 19:42:33.112709    2004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 19:42:33.203280    2004 out.go:291] Setting OutFile to fd 892 ...
	I0415 19:42:33.219482    2004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:42:33.219482    2004 out.go:304] Setting ErrFile to fd 828...
	I0415 19:42:33.219482    2004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:42:33.235839    2004 mustload.go:65] Loading cluster: multinode-841000
	I0415 19:42:33.236661    2004 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:42:33.238337    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:42:35.569660    2004 main.go:141] libmachine: [stdout =====>] : Off
	
	I0415 19:42:35.569660    2004 main.go:141] libmachine: [stderr =====>] : 
	W0415 19:42:35.569660    2004 host.go:58] "multinode-841000-m03" host status: Stopped
	I0415 19:42:35.572459    2004 out.go:177] * Starting "multinode-841000-m03" worker node in "multinode-841000" cluster
	I0415 19:42:35.574710    2004 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:42:35.574710    2004 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 19:42:35.575234    2004 cache.go:56] Caching tarball of preloaded images
	I0415 19:42:35.575488    2004 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:42:35.575488    2004 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 19:42:35.576110    2004 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:42:35.578242    2004 start.go:360] acquireMachinesLock for multinode-841000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 19:42:35.578765    2004 start.go:364] duration metric: took 522.5µs to acquireMachinesLock for "multinode-841000-m03"
	I0415 19:42:35.578928    2004 start.go:96] Skipping create...Using existing machine configuration
	I0415 19:42:35.578928    2004 fix.go:54] fixHost starting: m03
	I0415 19:42:35.579675    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:42:37.862784    2004 main.go:141] libmachine: [stdout =====>] : Off
	
	I0415 19:42:37.862784    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:37.863145    2004 fix.go:112] recreateIfNeeded on multinode-841000-m03: state=Stopped err=<nil>
	W0415 19:42:37.863145    2004 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 19:42:37.865644    2004 out.go:177] * Restarting existing hyperv VM for "multinode-841000-m03" ...
	I0415 19:42:37.870175    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-841000-m03
	I0415 19:42:41.152716    2004 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:42:41.152884    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:41.152935    2004 main.go:141] libmachine: Waiting for host to start...
	I0415 19:42:41.153084    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:42:43.597342    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:43.597815    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:43.598126    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:42:46.254388    2004 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:42:46.255030    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:47.256517    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:42:49.589568    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:49.590513    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:49.590513    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:42:52.332251    2004 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:42:52.332324    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:53.334962    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:42:55.724147    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:55.724147    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:55.725049    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:42:58.483296    2004 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:42:58.483345    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:59.490349    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:01.991809    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:01.991809    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:01.992326    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:04.696744    2004 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:43:04.696744    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:05.702893    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:08.093270    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:08.094279    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:08.094372    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:10.930189    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:10.931163    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:10.934232    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:13.281096    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:13.281096    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:13.282088    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:16.086258    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:16.086258    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:16.086773    2004 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:43:16.089898    2004 machine.go:94] provisionDockerMachine start ...
	I0415 19:43:16.089898    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:18.446820    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:18.446820    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:18.447126    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:21.175668    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:21.175668    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:21.184108    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:43:21.185149    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:43:21.185149    2004 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:43:21.322813    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 19:43:21.322951    2004 buildroot.go:166] provisioning hostname "multinode-841000-m03"
	I0415 19:43:21.323085    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:23.620284    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:23.621080    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:23.621175    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:26.432019    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:26.432065    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:26.438546    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:43:26.439300    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:43:26.439374    2004 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841000-m03 && echo "multinode-841000-m03" | sudo tee /etc/hostname
	I0415 19:43:26.598123    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841000-m03
	
	I0415 19:43:26.598123    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:28.921131    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:28.921131    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:28.921652    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:31.675928    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:31.675928    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:31.682025    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:43:31.682744    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:43:31.682744    2004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:43:31.822727    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 19:43:31.822727    2004 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 19:43:31.823303    2004 buildroot.go:174] setting up certificates
	I0415 19:43:31.823359    2004 provision.go:84] configureAuth start
	I0415 19:43:31.823423    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:34.171282    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:34.171282    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:34.171597    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:36.994443    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:36.994630    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:36.994810    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:39.350090    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:39.350090    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:39.350090    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:42.135294    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:42.136077    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:42.136077    2004 provision.go:143] copyHostCerts
	I0415 19:43:42.136343    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 19:43:42.136667    2004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:43:42.136667    2004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 19:43:42.137311    2004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 19:43:42.138744    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 19:43:42.139036    2004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:43:42.139117    2004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 19:43:42.139172    2004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:43:42.140232    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 19:43:42.140232    2004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:43:42.140752    2004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 19:43:42.141005    2004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:43:42.141978    2004 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-841000-m03 san=[127.0.0.1 172.19.52.34 localhost minikube multinode-841000-m03]
	I0415 19:43:42.451857    2004 provision.go:177] copyRemoteCerts
	I0415 19:43:42.469375    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:43:42.469375    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:44.787640    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:44.788084    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:44.788084    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:47.596004    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:47.596627    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:47.596684    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
	I0415 19:43:47.714046    2004 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2446294s)
	I0415 19:43:47.714046    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 19:43:47.715097    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 19:43:47.767193    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 19:43:47.767544    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0415 19:43:47.815392    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 19:43:47.816347    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 19:43:47.863932    2004 provision.go:87] duration metric: took 16.0404461s to configureAuth
	I0415 19:43:47.864010    2004 buildroot.go:189] setting minikube options for container-runtime
	I0415 19:43:47.864722    2004 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:43:47.864980    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:50.161606    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:50.161606    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:50.162168    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:52.894259    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:52.894259    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:52.901995    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:43:52.901995    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:43:52.901995    2004 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:43:53.037895    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 19:43:53.037895    2004 buildroot.go:70] root file system type: tmpfs
	I0415 19:43:53.038223    2004 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:43:53.038223    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:43:55.354294    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:43:55.354294    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:55.354476    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:43:58.092098    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:43:58.092098    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:43:58.098248    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:43:58.098965    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:43:58.099053    2004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:43:58.255288    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 19:43:58.255413    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:44:00.576546    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:44:00.576697    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:00.576766    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:44:03.369482    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:44:03.369482    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:03.373274    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:44:03.373274    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:44:03.373274    2004 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:44:05.789123    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 19:44:05.789123    2004 machine.go:97] duration metric: took 49.6988314s to provisionDockerMachine
	I0415 19:44:05.789123    2004 start.go:293] postStartSetup for "multinode-841000-m03" (driver="hyperv")
	I0415 19:44:05.789123    2004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:44:05.805306    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:44:05.805306    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:44:08.123669    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:44:08.123669    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:08.124720    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:44:10.867461    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:44:10.867461    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:10.869050    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
	I0415 19:44:10.983875    2004 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1785284s)
	I0415 19:44:10.999114    2004 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 19:44:11.009152    2004 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 19:44:11.009152    2004 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 19:44:11.010093    2004 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 19:44:11.010614    2004 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 19:44:11.010614    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 19:44:11.025771    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 19:44:11.045432    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 19:44:11.094837    2004 start.go:296] duration metric: took 5.3056717s for postStartSetup
	I0415 19:44:11.094837    2004 fix.go:56] duration metric: took 1m35.5151493s for fixHost
	I0415 19:44:11.094837    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:44:13.388503    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:44:13.388503    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:13.388914    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:44:16.118172    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:44:16.118871    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:16.125313    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:44:16.125836    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:44:16.125836    2004 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 19:44:16.253766    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713210256.264179299
	
	I0415 19:44:16.254427    2004 fix.go:216] guest clock: 1713210256.264179299
	I0415 19:44:16.254479    2004 fix.go:229] Guest: 2024-04-15 19:44:16.264179299 +0000 UTC Remote: 2024-04-15 19:44:11.094837 +0000 UTC m=+98.091758001 (delta=5.169342299s)
	I0415 19:44:16.254479    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:44:18.562134    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:44:18.562325    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:18.562416    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:44:21.356693    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:44:21.356693    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:21.363824    2004 main.go:141] libmachine: Using SSH client type: native
	I0415 19:44:21.364564    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
	I0415 19:44:21.364564    2004 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713210256
	I0415 19:44:21.507546    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 19:44:16 UTC 2024
	
	I0415 19:44:21.507546    2004 fix.go:236] clock set: Mon Apr 15 19:44:16 UTC 2024
	 (err=<nil>)
	I0415 19:44:21.507546    2004 start.go:83] releasing machines lock for "multinode-841000-m03", held for 1m45.9278438s
	I0415 19:44:21.508229    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:44:23.904211    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:44:23.904211    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:23.905112    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:44:26.722707    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:44:26.722707    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:26.729536    2004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 19:44:26.729536    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:44:26.744162    2004 ssh_runner.go:195] Run: systemctl --version
	I0415 19:44:26.744162    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:44:29.149427    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:44:29.149427    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:29.149427    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:44:29.150062    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:44:29.150171    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:29.150171    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:44:32.013904    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:44:32.014237    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:32.014295    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
	I0415 19:44:32.044181    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:44:32.044181    2004 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:44:32.046597    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
	I0415 19:44:32.194366    2004 ssh_runner.go:235] Completed: systemctl --version: (5.4497484s)
	I0415 19:44:32.194366    2004 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.464787s)
	I0415 19:44:32.208933    2004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 19:44:32.219180    2004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 19:44:32.240348    2004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 19:44:32.274936    2004 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 19:44:32.275141    2004 start.go:494] detecting cgroup driver to use...
	I0415 19:44:32.275141    2004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:44:32.328716    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 19:44:32.369496    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 19:44:32.392183    2004 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 19:44:32.407159    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 19:44:32.444437    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:44:32.480450    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 19:44:32.515107    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:44:32.555800    2004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 19:44:32.591905    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 19:44:32.628414    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 19:44:32.670268    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 19:44:32.710037    2004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 19:44:32.749993    2004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 19:44:32.784700    2004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:44:33.006816    2004 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 19:44:33.042141    2004 start.go:494] detecting cgroup driver to use...
	I0415 19:44:33.056631    2004 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 19:44:33.097907    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:44:33.139065    2004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 19:44:33.195272    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:44:33.238313    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:44:33.280398    2004 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 19:44:33.345187    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:44:33.373595    2004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:44:33.434870    2004 ssh_runner.go:195] Run: which cri-dockerd
	I0415 19:44:33.457267    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 19:44:33.478577    2004 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 19:44:33.531852    2004 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 19:44:33.763080    2004 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 19:44:33.963615    2004 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 19:44:33.963615    2004 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 19:44:34.020637    2004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:44:34.236897    2004 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:45:35.386371    2004 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1489854s)
	I0415 19:45:35.400770    2004 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 19:45:35.439090    2004 out.go:177] 
	W0415 19:45:35.442120    2004 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 19:44:03 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.059316558Z" level=info msg="Starting up"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.061777110Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.063241241Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.104450111Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134055836Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134219239Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134305941Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134410944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135199160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135305462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135654870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135769172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135804673Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135821673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.136336584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.137248403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.140420670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.141827300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142126006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142224508Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142787620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142915623Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142941524Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152759431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152901034Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152928034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152946735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152966935Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153080538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153918955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154252862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154366765Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154394865Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154415066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154432166Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154449567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154638571Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154770473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154795074Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154903876Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154932977Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154959277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154977378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154993278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155009078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155024679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155042079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155057079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155071980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155155782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155184882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155200582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155395587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155422187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155442988Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155527189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155575190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155592891Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155770595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155825796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155853496Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155867597Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155935998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155978599Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155999899Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156328506Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156385407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156432608Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156511310Z" level=info msg="containerd successfully booted in 0.055612s"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.123878157Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.191823302Z" level=info msg="Loading containers: start."
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.555157713Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.646326773Z" level=info msg="Loading containers: done."
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.733678951Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.734839976Z" level=info msg="Daemon has completed initialization"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796069592Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796415000Z" level=info msg="API listen on [::]:2376"
	Apr 15 19:44:05 multinode-841000-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 15 19:44:34 multinode-841000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.272911223Z" level=info msg="Processing signal 'terminated'"
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275299827Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275637528Z" level=info msg="Daemon shutdown complete"
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275706028Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275753028Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 19:44:35 multinode-841000-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 19:44:35 multinode-841000-m03 dockerd[1029]: time="2024-04-15T19:44:35.366142787Z" level=info msg="Starting up"
	Apr 15 19:45:35 multinode-841000-m03 dockerd[1029]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 19:45:35 multinode-841000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 19:44:03 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.059316558Z" level=info msg="Starting up"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.061777110Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.063241241Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.104450111Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134055836Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134219239Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134305941Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134410944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135199160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135305462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135654870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135769172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135804673Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135821673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.136336584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.137248403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.140420670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.141827300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142126006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142224508Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142787620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142915623Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142941524Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152759431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152901034Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152928034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152946735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152966935Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153080538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153918955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154252862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154366765Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154394865Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154415066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154432166Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154449567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154638571Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154770473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154795074Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154903876Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154932977Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154959277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154977378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154993278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155009078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155024679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155042079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155057079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155071980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155155782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155184882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155200582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155395587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155422187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155442988Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155527189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155575190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155592891Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155770595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155825796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155853496Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155867597Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155935998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155978599Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155999899Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156328506Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156385407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156432608Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156511310Z" level=info msg="containerd successfully booted in 0.055612s"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.123878157Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.191823302Z" level=info msg="Loading containers: start."
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.555157713Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.646326773Z" level=info msg="Loading containers: done."
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.733678951Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.734839976Z" level=info msg="Daemon has completed initialization"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796069592Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796415000Z" level=info msg="API listen on [::]:2376"
	Apr 15 19:44:05 multinode-841000-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 15 19:44:34 multinode-841000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.272911223Z" level=info msg="Processing signal 'terminated'"
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275299827Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275637528Z" level=info msg="Daemon shutdown complete"
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275706028Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275753028Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 19:44:35 multinode-841000-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 19:44:35 multinode-841000-m03 dockerd[1029]: time="2024-04-15T19:44:35.366142787Z" level=info msg="Starting up"
	Apr 15 19:45:35 multinode-841000-m03 dockerd[1029]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 19:45:35 multinode-841000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 19:45:35.442120    2004 out.go:239] * 
	W0415 19:45:35.461795    2004 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 19:45:35.464646    2004 out.go:177] 

** /stderr **
multinode_test.go:284: W0415 19:42:33.112709    2004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 19:42:33.203280    2004 out.go:291] Setting OutFile to fd 892 ...
I0415 19:42:33.219482    2004 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 19:42:33.219482    2004 out.go:304] Setting ErrFile to fd 828...
I0415 19:42:33.219482    2004 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 19:42:33.235839    2004 mustload.go:65] Loading cluster: multinode-841000
I0415 19:42:33.236661    2004 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 19:42:33.238337    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:42:35.569660    2004 main.go:141] libmachine: [stdout =====>] : Off

I0415 19:42:35.569660    2004 main.go:141] libmachine: [stderr =====>] : 
W0415 19:42:35.569660    2004 host.go:58] "multinode-841000-m03" host status: Stopped
I0415 19:42:35.572459    2004 out.go:177] * Starting "multinode-841000-m03" worker node in "multinode-841000" cluster
I0415 19:42:35.574710    2004 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0415 19:42:35.574710    2004 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
I0415 19:42:35.575234    2004 cache.go:56] Caching tarball of preloaded images
I0415 19:42:35.575488    2004 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0415 19:42:35.575488    2004 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0415 19:42:35.576110    2004 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
I0415 19:42:35.578242    2004 start.go:360] acquireMachinesLock for multinode-841000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0415 19:42:35.578765    2004 start.go:364] duration metric: took 522.5µs to acquireMachinesLock for "multinode-841000-m03"
I0415 19:42:35.578928    2004 start.go:96] Skipping create...Using existing machine configuration
I0415 19:42:35.578928    2004 fix.go:54] fixHost starting: m03
I0415 19:42:35.579675    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:42:37.862784    2004 main.go:141] libmachine: [stdout =====>] : Off

I0415 19:42:37.862784    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:37.863145    2004 fix.go:112] recreateIfNeeded on multinode-841000-m03: state=Stopped err=<nil>
W0415 19:42:37.863145    2004 fix.go:138] unexpected machine state, will restart: <nil>
I0415 19:42:37.865644    2004 out.go:177] * Restarting existing hyperv VM for "multinode-841000-m03" ...
I0415 19:42:37.870175    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-841000-m03
I0415 19:42:41.152716    2004 main.go:141] libmachine: [stdout =====>] : 
I0415 19:42:41.152884    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:41.152935    2004 main.go:141] libmachine: Waiting for host to start...
I0415 19:42:41.153084    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:42:43.597342    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:42:43.597815    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:43.598126    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:42:46.254388    2004 main.go:141] libmachine: [stdout =====>] : 
I0415 19:42:46.255030    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:47.256517    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:42:49.589568    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:42:49.590513    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:49.590513    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:42:52.332251    2004 main.go:141] libmachine: [stdout =====>] : 
I0415 19:42:52.332324    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:53.334962    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:42:55.724147    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:42:55.724147    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:55.725049    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:42:58.483296    2004 main.go:141] libmachine: [stdout =====>] : 
I0415 19:42:58.483345    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:42:59.490349    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:01.991809    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:01.991809    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:01.992326    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:04.696744    2004 main.go:141] libmachine: [stdout =====>] : 
I0415 19:43:04.696744    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:05.702893    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:08.093270    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:08.094279    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:08.094372    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:10.930189    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:10.931163    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:10.934232    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:13.281096    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:13.281096    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:13.282088    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:16.086258    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:16.086258    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:16.086773    2004 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
I0415 19:43:16.089898    2004 machine.go:94] provisionDockerMachine start ...
I0415 19:43:16.089898    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:18.446820    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:18.446820    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:18.447126    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:21.175668    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:21.175668    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:21.184108    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:43:21.185149    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:43:21.185149    2004 main.go:141] libmachine: About to run SSH command:
hostname
I0415 19:43:21.322813    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0415 19:43:21.322951    2004 buildroot.go:166] provisioning hostname "multinode-841000-m03"
I0415 19:43:21.323085    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:23.620284    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:23.621080    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:23.621175    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:26.432019    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:26.432065    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:26.438546    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:43:26.439300    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:43:26.439374    2004 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-841000-m03 && echo "multinode-841000-m03" | sudo tee /etc/hostname
I0415 19:43:26.598123    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841000-m03

I0415 19:43:26.598123    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:28.921131    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:28.921131    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:28.921652    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:31.675928    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:31.675928    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:31.682025    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:43:31.682744    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:43:31.682744    2004 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-841000-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841000-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-841000-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0415 19:43:31.822727    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0415 19:43:31.822727    2004 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0415 19:43:31.823303    2004 buildroot.go:174] setting up certificates
I0415 19:43:31.823359    2004 provision.go:84] configureAuth start
I0415 19:43:31.823423    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:34.171282    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:34.171282    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:34.171597    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:36.994443    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:36.994630    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:36.994810    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:39.350090    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:39.350090    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:39.350090    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:42.135294    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:42.136077    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:42.136077    2004 provision.go:143] copyHostCerts
I0415 19:43:42.136343    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
I0415 19:43:42.136667    2004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0415 19:43:42.136667    2004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0415 19:43:42.137311    2004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
I0415 19:43:42.138744    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
I0415 19:43:42.139036    2004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0415 19:43:42.139117    2004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0415 19:43:42.139172    2004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0415 19:43:42.140232    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
I0415 19:43:42.140232    2004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0415 19:43:42.140752    2004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0415 19:43:42.141005    2004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
I0415 19:43:42.141978    2004 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-841000-m03 san=[127.0.0.1 172.19.52.34 localhost minikube multinode-841000-m03]
I0415 19:43:42.451857    2004 provision.go:177] copyRemoteCerts
I0415 19:43:42.469375    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0415 19:43:42.469375    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:44.787640    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:44.788084    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:44.788084    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:47.596004    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:47.596627    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:47.596684    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
I0415 19:43:47.714046    2004 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2446294s)
I0415 19:43:47.714046    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0415 19:43:47.715097    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0415 19:43:47.767193    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0415 19:43:47.767544    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0415 19:43:47.815392    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0415 19:43:47.816347    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0415 19:43:47.863932    2004 provision.go:87] duration metric: took 16.0404461s to configureAuth
I0415 19:43:47.864010    2004 buildroot.go:189] setting minikube options for container-runtime
I0415 19:43:47.864722    2004 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 19:43:47.864980    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:50.161606    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:50.161606    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:50.162168    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:52.894259    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:52.894259    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:52.901995    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:43:52.901995    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:43:52.901995    2004 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0415 19:43:53.037895    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0415 19:43:53.037895    2004 buildroot.go:70] root file system type: tmpfs
I0415 19:43:53.038223    2004 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0415 19:43:53.038223    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:43:55.354294    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:43:55.354294    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:55.354476    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:43:58.092098    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:43:58.092098    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:43:58.098248    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:43:58.098965    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:43:58.099053    2004 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0415 19:43:58.255288    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0415 19:43:58.255413    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:44:00.576546    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:44:00.576697    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:00.576766    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:44:03.369482    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:44:03.369482    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:03.373274    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:44:03.373274    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:44:03.373274    2004 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0415 19:44:05.789123    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0415 19:44:05.789123    2004 machine.go:97] duration metric: took 49.6988314s to provisionDockerMachine
I0415 19:44:05.789123    2004 start.go:293] postStartSetup for "multinode-841000-m03" (driver="hyperv")
I0415 19:44:05.789123    2004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0415 19:44:05.805306    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0415 19:44:05.805306    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:44:08.123669    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:44:08.123669    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:08.124720    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:44:10.867461    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:44:10.867461    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:10.869050    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
I0415 19:44:10.983875    2004 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1785284s)
I0415 19:44:10.999114    2004 ssh_runner.go:195] Run: cat /etc/os-release
I0415 19:44:11.009152    2004 info.go:137] Remote host: Buildroot 2023.02.9
I0415 19:44:11.009152    2004 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0415 19:44:11.010093    2004 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0415 19:44:11.010614    2004 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
I0415 19:44:11.010614    2004 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
I0415 19:44:11.025771    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0415 19:44:11.045432    2004 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
I0415 19:44:11.094837    2004 start.go:296] duration metric: took 5.3056717s for postStartSetup
I0415 19:44:11.094837    2004 fix.go:56] duration metric: took 1m35.5151493s for fixHost
I0415 19:44:11.094837    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:44:13.388503    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:44:13.388503    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:13.388914    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:44:16.118172    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:44:16.118871    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:16.125313    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:44:16.125836    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:44:16.125836    2004 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0415 19:44:16.253766    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713210256.264179299

I0415 19:44:16.254427    2004 fix.go:216] guest clock: 1713210256.264179299
I0415 19:44:16.254479    2004 fix.go:229] Guest: 2024-04-15 19:44:16.264179299 +0000 UTC Remote: 2024-04-15 19:44:11.094837 +0000 UTC m=+98.091758001 (delta=5.169342299s)
I0415 19:44:16.254479    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:44:18.562134    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:44:18.562325    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:18.562416    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:44:21.356693    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:44:21.356693    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:21.363824    2004 main.go:141] libmachine: Using SSH client type: native
I0415 19:44:21.364564    2004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.52.34 22 <nil> <nil>}
I0415 19:44:21.364564    2004 main.go:141] libmachine: About to run SSH command:
sudo date -s @1713210256
I0415 19:44:21.507546    2004 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 19:44:16 UTC 2024

I0415 19:44:21.507546    2004 fix.go:236] clock set: Mon Apr 15 19:44:16 UTC 2024
(err=<nil>)
I0415 19:44:21.507546    2004 start.go:83] releasing machines lock for "multinode-841000-m03", held for 1m45.9278438s
I0415 19:44:21.508229    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:44:23.904211    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:44:23.904211    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:23.905112    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:44:26.722707    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:44:26.722707    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:26.729536    2004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0415 19:44:26.729536    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:44:26.744162    2004 ssh_runner.go:195] Run: systemctl --version
I0415 19:44:26.744162    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
I0415 19:44:29.149427    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:44:29.149427    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:29.149427    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:44:29.150062    2004 main.go:141] libmachine: [stdout =====>] : Running

I0415 19:44:29.150171    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:29.150171    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
I0415 19:44:32.013904    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:44:32.014237    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:32.014295    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
I0415 19:44:32.044181    2004 main.go:141] libmachine: [stdout =====>] : 172.19.52.34

I0415 19:44:32.044181    2004 main.go:141] libmachine: [stderr =====>] : 
I0415 19:44:32.046597    2004 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
I0415 19:44:32.194366    2004 ssh_runner.go:235] Completed: systemctl --version: (5.4497484s)
I0415 19:44:32.194366    2004 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.464787s)
I0415 19:44:32.208933    2004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0415 19:44:32.219180    2004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0415 19:44:32.240348    2004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0415 19:44:32.274936    2004 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0415 19:44:32.275141    2004 start.go:494] detecting cgroup driver to use...
I0415 19:44:32.275141    2004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0415 19:44:32.328716    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0415 19:44:32.369496    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0415 19:44:32.392183    2004 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0415 19:44:32.407159    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0415 19:44:32.444437    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0415 19:44:32.480450    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0415 19:44:32.515107    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0415 19:44:32.555800    2004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0415 19:44:32.591905    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0415 19:44:32.628414    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0415 19:44:32.670268    2004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0415 19:44:32.710037    2004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0415 19:44:32.749993    2004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0415 19:44:32.784700    2004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0415 19:44:33.006816    2004 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0415 19:44:33.042141    2004 start.go:494] detecting cgroup driver to use...
I0415 19:44:33.056631    2004 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0415 19:44:33.097907    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0415 19:44:33.139065    2004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0415 19:44:33.195272    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0415 19:44:33.238313    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0415 19:44:33.280398    2004 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0415 19:44:33.345187    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0415 19:44:33.373595    2004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0415 19:44:33.434870    2004 ssh_runner.go:195] Run: which cri-dockerd
I0415 19:44:33.457267    2004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0415 19:44:33.478577    2004 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0415 19:44:33.531852    2004 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0415 19:44:33.763080    2004 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0415 19:44:33.963615    2004 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0415 19:44:33.963615    2004 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0415 19:44:34.020637    2004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0415 19:44:34.236897    2004 ssh_runner.go:195] Run: sudo systemctl restart docker
I0415 19:45:35.386371    2004 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1489854s)
I0415 19:45:35.400770    2004 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0415 19:45:35.439090    2004 out.go:177] 
W0415 19:45:35.442120    2004 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 15 19:44:03 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.059316558Z" level=info msg="Starting up"
Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.061777110Z" level=info msg="containerd not running, starting managed containerd"
Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.063241241Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.104450111Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134055836Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134219239Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134305941Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134410944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135199160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135305462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135654870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135769172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135804673Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135821673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.136336584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.137248403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.140420670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.141827300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142126006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142224508Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142787620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142915623Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142941524Z" level=info msg="metadata content store policy set" policy=shared
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152759431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152901034Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152928034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152946735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152966935Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153080538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153918955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154252862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154366765Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154394865Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154415066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154432166Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154449567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154638571Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154770473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154795074Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154903876Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154932977Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154959277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154977378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154993278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155009078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155024679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155042079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155057079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155071980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155155782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155184882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155200582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155395587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155422187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155442988Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155527189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155575190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155592891Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155770595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155825796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155853496Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155867597Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155935998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155978599Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155999899Z" level=info msg="NRI interface is disabled by configuration."
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156328506Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156385407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156432608Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156511310Z" level=info msg="containerd successfully booted in 0.055612s"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.123878157Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.191823302Z" level=info msg="Loading containers: start."
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.555157713Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.646326773Z" level=info msg="Loading containers: done."
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.733678951Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.734839976Z" level=info msg="Daemon has completed initialization"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796069592Z" level=info msg="API listen on /var/run/docker.sock"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796415000Z" level=info msg="API listen on [::]:2376"
Apr 15 19:44:05 multinode-841000-m03 systemd[1]: Started Docker Application Container Engine.
Apr 15 19:44:34 multinode-841000-m03 systemd[1]: Stopping Docker Application Container Engine...
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.272911223Z" level=info msg="Processing signal 'terminated'"
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275299827Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275637528Z" level=info msg="Daemon shutdown complete"
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275706028Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275753028Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 15 19:44:35 multinode-841000-m03 systemd[1]: docker.service: Deactivated successfully.
Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Stopped Docker Application Container Engine.
Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 15 19:44:35 multinode-841000-m03 dockerd[1029]: time="2024-04-15T19:44:35.366142787Z" level=info msg="Starting up"
Apr 15 19:45:35 multinode-841000-m03 dockerd[1029]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 15 19:45:35 multinode-841000-m03 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 15 19:44:03 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.059316558Z" level=info msg="Starting up"
Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.061777110Z" level=info msg="containerd not running, starting managed containerd"
Apr 15 19:44:04 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:04.063241241Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.104450111Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134055836Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134219239Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134305941Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.134410944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135199160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135305462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135654870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135769172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135804673Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.135821673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.136336584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.137248403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.140420670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.141827300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142126006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142224508Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142787620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142915623Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.142941524Z" level=info msg="metadata content store policy set" policy=shared
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152759431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152901034Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152928034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152946735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.152966935Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153080538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.153918955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154252862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154366765Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154394865Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154415066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154432166Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154449567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154638571Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154770473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154795074Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154903876Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154932977Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154959277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154977378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.154993278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155009078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155024679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155042079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155057079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155071980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155155782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155184882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155200582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155395587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155422187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155442988Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155527189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155575190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155592891Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155770595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155825796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155853496Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155867597Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155935998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155978599Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.155999899Z" level=info msg="NRI interface is disabled by configuration."
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156328506Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156385407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156432608Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 15 19:44:04 multinode-841000-m03 dockerd[662]: time="2024-04-15T19:44:04.156511310Z" level=info msg="containerd successfully booted in 0.055612s"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.123878157Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.191823302Z" level=info msg="Loading containers: start."
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.555157713Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.646326773Z" level=info msg="Loading containers: done."
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.733678951Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.734839976Z" level=info msg="Daemon has completed initialization"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796069592Z" level=info msg="API listen on /var/run/docker.sock"
Apr 15 19:44:05 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:05.796415000Z" level=info msg="API listen on [::]:2376"
Apr 15 19:44:05 multinode-841000-m03 systemd[1]: Started Docker Application Container Engine.
Apr 15 19:44:34 multinode-841000-m03 systemd[1]: Stopping Docker Application Container Engine...
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.272911223Z" level=info msg="Processing signal 'terminated'"
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275299827Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275637528Z" level=info msg="Daemon shutdown complete"
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275706028Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 15 19:44:34 multinode-841000-m03 dockerd[655]: time="2024-04-15T19:44:34.275753028Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 15 19:44:35 multinode-841000-m03 systemd[1]: docker.service: Deactivated successfully.
Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Stopped Docker Application Container Engine.
Apr 15 19:44:35 multinode-841000-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 15 19:44:35 multinode-841000-m03 dockerd[1029]: time="2024-04-15T19:44:35.366142787Z" level=info msg="Starting up"
Apr 15 19:45:35 multinode-841000-m03 dockerd[1029]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 15 19:45:35 multinode-841000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 15 19:45:35 multinode-841000-m03 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
W0415 19:45:35.442120    2004 out.go:239] * 
W0415 19:45:35.461795    2004 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_0.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 19:45:35.464646    2004 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-841000 node start m03 -v=7 --alsologtostderr": exit status 90
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-841000 status -v=7 --alsologtostderr: exit status 2 (38.4270371s)

-- stdout --
	multinode-841000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0415 19:45:36.057016    1460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 19:45:36.148539    1460 out.go:291] Setting OutFile to fd 960 ...
	I0415 19:45:36.149174    1460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:45:36.149703    1460 out.go:304] Setting ErrFile to fd 1004...
	I0415 19:45:36.149703    1460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:45:36.164310    1460 out.go:298] Setting JSON to false
	I0415 19:45:36.164536    1460 mustload.go:65] Loading cluster: multinode-841000
	I0415 19:45:36.164536    1460 notify.go:220] Checking for updates...
	I0415 19:45:36.165557    1460 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:45:36.165619    1460 status.go:255] checking status of multinode-841000 ...
	I0415 19:45:36.166579    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:45:38.508325    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:45:38.508325    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:38.508325    1460 status.go:330] multinode-841000 host status = "Running" (err=<nil>)
	I0415 19:45:38.508325    1460 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:45:38.509090    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:45:40.877402    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:45:40.877474    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:40.877474    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:45:43.684651    1460 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:45:43.684964    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:43.684964    1460 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:45:43.699804    1460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:45:43.699804    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:45:46.068468    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:45:46.068531    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:46.068609    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:45:48.829408    1460 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:45:48.829408    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:48.830743    1460 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:45:48.928511    1460 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2286651s)
	I0415 19:45:48.942931    1460 ssh_runner.go:195] Run: systemctl --version
	I0415 19:45:48.967727    1460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:45:48.996718    1460 kubeconfig.go:125] found "multinode-841000" server: "https://172.19.62.237:8443"
	I0415 19:45:48.996775    1460 api_server.go:166] Checking apiserver status ...
	I0415 19:45:49.009364    1460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:45:49.049953    1460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2019/cgroup
	W0415 19:45:49.072303    1460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2019/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 19:45:49.085193    1460 ssh_runner.go:195] Run: ls
	I0415 19:45:49.093968    1460 api_server.go:253] Checking apiserver healthz at https://172.19.62.237:8443/healthz ...
	I0415 19:45:49.103506    1460 api_server.go:279] https://172.19.62.237:8443/healthz returned 200:
	ok
	I0415 19:45:49.104113    1460 status.go:422] multinode-841000 apiserver status = Running (err=<nil>)
	I0415 19:45:49.104113    1460 status.go:257] multinode-841000 status: &{Name:multinode-841000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 19:45:49.104113    1460 status.go:255] checking status of multinode-841000-m02 ...
	I0415 19:45:49.104186    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:45:51.390555    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:45:51.390555    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:51.391461    1460 status.go:330] multinode-841000-m02 host status = "Running" (err=<nil>)
	I0415 19:45:51.391519    1460 host.go:66] Checking if "multinode-841000-m02" exists ...
	I0415 19:45:51.392071    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:45:53.732198    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:45:53.732198    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:53.732951    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:45:56.495702    1460 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:45:56.495702    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:56.495702    1460 host.go:66] Checking if "multinode-841000-m02" exists ...
	I0415 19:45:56.511850    1460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:45:56.511850    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:45:58.767528    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:45:58.767528    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:45:58.767528    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:01.553167    1460 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:46:01.553167    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:01.554351    1460 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:46:01.657494    1460 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1456035s)
	I0415 19:46:01.672386    1460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:46:01.702304    1460 status.go:257] multinode-841000-m02 status: &{Name:multinode-841000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0415 19:46:01.702409    1460 status.go:255] checking status of multinode-841000-m03 ...
	I0415 19:46:01.702493    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:46:04.022548    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:04.022548    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:04.023600    1460 status.go:330] multinode-841000-m03 host status = "Running" (err=<nil>)
	I0415 19:46:04.023600    1460 host.go:66] Checking if "multinode-841000-m03" exists ...
	I0415 19:46:04.024075    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:46:06.365302    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:06.365836    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:06.365836    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:09.100032    1460 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:46:09.100147    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:09.100248    1460 host.go:66] Checking if "multinode-841000-m03" exists ...
	I0415 19:46:09.117164    1460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:46:09.117164    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:46:11.434214    1460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:11.434214    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:11.434812    1460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:14.177031    1460 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:46:14.177031    1460 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:14.177999    1460 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
	I0415 19:46:14.269179    1460 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1519738s)
	I0415 19:46:14.284135    1460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:46:14.308541    1460 status.go:257] multinode-841000-m03 status: &{Name:multinode-841000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 status -v=7 --alsologtostderr
E0415 19:46:53.590699   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-841000 status -v=7 --alsologtostderr: exit status 2 (38.5124096s)

-- stdout --
	multinode-841000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0415 19:46:15.549836    4060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 19:46:15.637257    4060 out.go:291] Setting OutFile to fd 1012 ...
	I0415 19:46:15.638239    4060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:46:15.638239    4060 out.go:304] Setting ErrFile to fd 960...
	I0415 19:46:15.638239    4060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:46:15.654443    4060 out.go:298] Setting JSON to false
	I0415 19:46:15.654524    4060 mustload.go:65] Loading cluster: multinode-841000
	I0415 19:46:15.654524    4060 notify.go:220] Checking for updates...
	I0415 19:46:15.655249    4060 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:46:15.655249    4060 status.go:255] checking status of multinode-841000 ...
	I0415 19:46:15.656440    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:46:17.970066    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:17.970066    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:17.970066    4060 status.go:330] multinode-841000 host status = "Running" (err=<nil>)
	I0415 19:46:17.970066    4060 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:46:17.970792    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:46:20.330849    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:20.330849    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:20.331465    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:23.108750    4060 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:46:23.108750    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:23.108750    4060 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:46:23.125283    4060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:46:23.125283    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:46:25.451892    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:25.451892    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:25.451892    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:28.244252    4060 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:46:28.245065    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:28.245561    4060 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:46:28.343474    4060 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2181492s)
	I0415 19:46:28.362369    4060 ssh_runner.go:195] Run: systemctl --version
	I0415 19:46:28.393229    4060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:46:28.428232    4060 kubeconfig.go:125] found "multinode-841000" server: "https://172.19.62.237:8443"
	I0415 19:46:28.428306    4060 api_server.go:166] Checking apiserver status ...
	I0415 19:46:28.443017    4060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:46:28.486535    4060 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2019/cgroup
	W0415 19:46:28.508200    4060 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2019/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 19:46:28.526243    4060 ssh_runner.go:195] Run: ls
	I0415 19:46:28.533112    4060 api_server.go:253] Checking apiserver healthz at https://172.19.62.237:8443/healthz ...
	I0415 19:46:28.541692    4060 api_server.go:279] https://172.19.62.237:8443/healthz returned 200:
	ok
	I0415 19:46:28.541692    4060 status.go:422] multinode-841000 apiserver status = Running (err=<nil>)
	I0415 19:46:28.541692    4060 status.go:257] multinode-841000 status: &{Name:multinode-841000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 19:46:28.541830    4060 status.go:255] checking status of multinode-841000-m02 ...
	I0415 19:46:28.542730    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:46:30.830963    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:30.830963    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:30.831190    4060 status.go:330] multinode-841000-m02 host status = "Running" (err=<nil>)
	I0415 19:46:30.831190    4060 host.go:66] Checking if "multinode-841000-m02" exists ...
	I0415 19:46:30.833577    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:46:33.156508    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:33.156508    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:33.157349    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:35.957526    4060 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:46:35.957526    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:35.957526    4060 host.go:66] Checking if "multinode-841000-m02" exists ...
	I0415 19:46:35.973688    4060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:46:35.973688    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:46:38.308777    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:38.308777    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:38.309827    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:41.096668    4060 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:46:41.096668    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:41.096668    4060 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:46:41.196911    4060 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2231813s)
	I0415 19:46:41.211548    4060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:46:41.244156    4060 status.go:257] multinode-841000-m02 status: &{Name:multinode-841000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0415 19:46:41.244224    4060 status.go:255] checking status of multinode-841000-m03 ...
	I0415 19:46:41.244931    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:46:43.605271    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:43.605271    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:43.605271    4060 status.go:330] multinode-841000-m03 host status = "Running" (err=<nil>)
	I0415 19:46:43.605271    4060 host.go:66] Checking if "multinode-841000-m03" exists ...
	I0415 19:46:43.605955    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:46:45.916448    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:45.916726    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:45.916799    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:48.665961    4060 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:46:48.665961    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:48.665961    4060 host.go:66] Checking if "multinode-841000-m03" exists ...
	I0415 19:46:48.683794    4060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:46:48.683794    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:46:50.980422    4060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:46:50.980592    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:50.980900    4060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m03 ).networkadapters[0]).ipaddresses[0]
	I0415 19:46:53.763779    4060 main.go:141] libmachine: [stdout =====>] : 172.19.52.34
	
	I0415 19:46:53.764312    4060 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:46:53.764442    4060 sshutil.go:53] new ssh client: &{IP:172.19.52.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m03\id_rsa Username:docker}
	I0415 19:46:53.867372    4060 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1835369s)
	I0415 19:46:53.881390    4060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:46:53.909003    4060 status.go:257] multinode-841000-m03 status: &{Name:multinode-841000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-841000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-841000 -n multinode-841000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-841000 -n multinode-841000: (13.069977s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 logs -n 25: (9.3824232s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| cp      | multinode-841000 cp multinode-841000:/home/docker/cp-test.txt                                                            | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:36 UTC | 15 Apr 24 19:36 UTC |
	|         | multinode-841000-m03:/home/docker/cp-test_multinode-841000_multinode-841000-m03.txt                                      |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:36 UTC | 15 Apr 24 19:37 UTC |
	|         | multinode-841000 sudo cat                                                                                                |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n multinode-841000-m03 sudo cat                                                                    | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:37 UTC | 15 Apr 24 19:37 UTC |
	|         | /home/docker/cp-test_multinode-841000_multinode-841000-m03.txt                                                           |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp testdata\cp-test.txt                                                                                 | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:37 UTC | 15 Apr 24 19:37 UTC |
	|         | multinode-841000-m02:/home/docker/cp-test.txt                                                                            |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:37 UTC | 15 Apr 24 19:37 UTC |
	|         | multinode-841000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:37 UTC | 15 Apr 24 19:37 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000-m02.txt |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:37 UTC | 15 Apr 24 19:37 UTC |
	|         | multinode-841000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:37 UTC | 15 Apr 24 19:38 UTC |
	|         | multinode-841000:/home/docker/cp-test_multinode-841000-m02_multinode-841000.txt                                          |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:38 UTC | 15 Apr 24 19:38 UTC |
	|         | multinode-841000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n multinode-841000 sudo cat                                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:38 UTC | 15 Apr 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-841000-m02_multinode-841000.txt                                                           |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:38 UTC | 15 Apr 24 19:38 UTC |
	|         | multinode-841000-m03:/home/docker/cp-test_multinode-841000-m02_multinode-841000-m03.txt                                  |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:38 UTC | 15 Apr 24 19:39 UTC |
	|         | multinode-841000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n multinode-841000-m03 sudo cat                                                                    | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:39 UTC | 15 Apr 24 19:39 UTC |
	|         | /home/docker/cp-test_multinode-841000-m02_multinode-841000-m03.txt                                                       |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp testdata\cp-test.txt                                                                                 | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:39 UTC | 15 Apr 24 19:39 UTC |
	|         | multinode-841000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:39 UTC | 15 Apr 24 19:39 UTC |
	|         | multinode-841000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:39 UTC | 15 Apr 24 19:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000-m03.txt |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:39 UTC | 15 Apr 24 19:39 UTC |
	|         | multinode-841000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:39 UTC | 15 Apr 24 19:40 UTC |
	|         | multinode-841000:/home/docker/cp-test_multinode-841000-m03_multinode-841000.txt                                          |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:40 UTC | 15 Apr 24 19:40 UTC |
	|         | multinode-841000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n multinode-841000 sudo cat                                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:40 UTC | 15 Apr 24 19:40 UTC |
	|         | /home/docker/cp-test_multinode-841000-m03_multinode-841000.txt                                                           |                  |                   |                |                     |                     |
	| cp      | multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt                                                        | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:40 UTC | 15 Apr 24 19:40 UTC |
	|         | multinode-841000-m02:/home/docker/cp-test_multinode-841000-m03_multinode-841000-m02.txt                                  |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n                                                                                                  | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:40 UTC | 15 Apr 24 19:41 UTC |
	|         | multinode-841000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-841000 ssh -n multinode-841000-m02 sudo cat                                                                    | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:41 UTC | 15 Apr 24 19:41 UTC |
	|         | /home/docker/cp-test_multinode-841000-m03_multinode-841000-m02.txt                                                       |                  |                   |                |                     |                     |
	| node    | multinode-841000 node stop m03                                                                                           | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:41 UTC | 15 Apr 24 19:41 UTC |
	| node    | multinode-841000 node start                                                                                              | multinode-841000 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:42 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |                |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 19:21:40
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 19:21:40.060634    2716 out.go:291] Setting OutFile to fd 796 ...
	I0415 19:21:40.061212    2716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:21:40.061212    2716 out.go:304] Setting ErrFile to fd 656...
	I0415 19:21:40.061212    2716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:21:40.085368    2716 out.go:298] Setting JSON to false
	I0415 19:21:40.088968    2716 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20626,"bootTime":1713188273,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 19:21:40.088968    2716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 19:21:40.093025    2716 out.go:177] * [multinode-841000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 19:21:40.100019    2716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:21:40.100019    2716 notify.go:220] Checking for updates...
	I0415 19:21:40.103009    2716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 19:21:40.105581    2716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 19:21:40.109842    2716 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 19:21:40.112764    2716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 19:21:40.115792    2716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 19:21:45.911983    2716 out.go:177] * Using the hyperv driver based on user configuration
	I0415 19:21:45.915263    2716 start.go:297] selected driver: hyperv
	I0415 19:21:45.915263    2716 start.go:901] validating driver "hyperv" against <nil>
	I0415 19:21:45.915263    2716 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 19:21:45.972261    2716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 19:21:45.973671    2716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:21:45.973671    2716 cni.go:84] Creating CNI manager for ""
	I0415 19:21:45.973671    2716 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 19:21:45.973671    2716 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 19:21:45.973671    2716 start.go:340] cluster config:
	{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:21:45.974333    2716 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 19:21:45.978465    2716 out.go:177] * Starting "multinode-841000" primary control-plane node in "multinode-841000" cluster
	I0415 19:21:45.981272    2716 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:21:45.981272    2716 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 19:21:45.981272    2716 cache.go:56] Caching tarball of preloaded images
	I0415 19:21:45.981781    2716 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:21:45.982093    2716 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 19:21:45.982275    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:21:45.982275    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json: {Name:mk417aea25697d9ce4f3bb1be1051fa880d1f409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:21:45.984073    2716 start.go:360] acquireMachinesLock for multinode-841000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 19:21:45.984073    2716 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-841000"
	I0415 19:21:45.984506    2716 start.go:93] Provisioning new machine with config: &{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 19:21:45.984506    2716 start.go:125] createHost starting for "" (driver="hyperv")
	I0415 19:21:45.989753    2716 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 19:21:45.989926    2716 start.go:159] libmachine.API.Create for "multinode-841000" (driver="hyperv")
	I0415 19:21:45.989926    2716 client.go:168] LocalClient.Create starting
	I0415 19:21:45.990713    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 19:21:45.990868    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:21:45.990868    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:21:45.990868    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 19:21:45.991427    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:21:45.991427    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:21:45.991606    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 19:21:48.237174    2716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 19:21:48.237174    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:48.238273    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 19:21:50.082410    2716 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 19:21:50.082410    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:50.083097    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:21:51.638692    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:21:51.638692    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:51.638794    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:21:55.520384    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:21:55.521108    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:55.523627    2716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 19:21:56.104338    2716 main.go:141] libmachine: Creating SSH key...
	I0415 19:21:56.313160    2716 main.go:141] libmachine: Creating VM...
	I0415 19:21:56.313160    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:21:59.367792    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:21:59.367792    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:21:59.367792    2716 main.go:141] libmachine: Using switch "Default Switch"
	I0415 19:21:59.368086    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:22:01.228599    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:22:01.228693    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:01.228693    2716 main.go:141] libmachine: Creating VHD
	I0415 19:22:01.228755    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 19:22:05.263884    2716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F2D30E75-2B2A-480A-A926-F1F120B4E376
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 19:22:05.263884    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:05.263884    2716 main.go:141] libmachine: Writing magic tar header
	I0415 19:22:05.264533    2716 main.go:141] libmachine: Writing SSH key tar header
	I0415 19:22:05.274223    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 19:22:08.613133    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:08.613133    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:08.613914    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\disk.vhd' -SizeBytes 20000MB
	I0415 19:22:11.374881    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:11.375432    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:11.375572    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-841000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 19:22:15.262016    2716 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-841000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 19:22:15.262916    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:15.262916    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-841000 -DynamicMemoryEnabled $false
	I0415 19:22:17.715675    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:17.715892    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:17.715949    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-841000 -Count 2
	I0415 19:22:20.036849    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:20.037654    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:20.037752    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-841000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\boot2docker.iso'
	I0415 19:22:22.799227    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:22.799227    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:22.799227    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-841000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\disk.vhd'
	I0415 19:22:25.689574    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:25.689903    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:25.689903    2716 main.go:141] libmachine: Starting VM...
	I0415 19:22:25.689903    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-841000
	I0415 19:22:29.012810    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:29.012872    2716 main.go:141] libmachine: [stderr =====>] : 
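	Every Hyper-V operation above is run by shelling out to powershell.exe with -NoProfile -NonInteractive and capturing stdout/stderr separately, which is what produces the paired [executing ==>] / [stdout =====>] / [stderr =====>] lines. A minimal sketch of that invocation pattern (the helper names psArgs/runPowerShell are illustrative, not minikube's actual API):

	```go
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// psArgs composes the argument list the log shows: no profile,
	// non-interactive, followed by the script text itself.
	func psArgs(script string) []string {
		return []string{"-NoProfile", "-NonInteractive", script}
	}

	// runPowerShell runs the script and returns stdout and stderr
	// separately, mirroring the driver's paired log lines.
	func runPowerShell(script string) (stdout, stderr string, err error) {
		cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`, psArgs(script)...)
		var out, errBuf bytes.Buffer
		cmd.Stdout, cmd.Stderr = &out, &errBuf
		err = cmd.Run()
		return out.String(), errBuf.String(), err
	}

	func main() {
		fmt.Println(psArgs(`Hyper-V\Start-VM multinode-841000`))
	}
	```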
	I0415 19:22:29.012982    2716 main.go:141] libmachine: Waiting for host to start...
	I0415 19:22:29.013108    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:31.456870    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:31.457067    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:31.457067    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:34.115184    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:34.115184    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:35.126466    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:37.431717    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:37.431717    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:37.432013    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:40.110261    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:40.110261    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:41.110897    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:43.526331    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:43.526664    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:43.526664    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:46.207371    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:46.207603    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:47.213986    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:49.558395    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:49.558395    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:49.558622    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:52.275773    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:22:52.276340    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:53.277303    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:22:55.677874    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:22:55.677874    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:55.678677    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:22:58.430305    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:22:58.431267    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:22:58.431472    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:00.717245    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:00.717245    2716 main.go:141] libmachine: [stderr =====>] : 
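	The "Waiting for host to start..." phase above is a simple poll: query the VM state, query the first network adapter's first IP address, and sleep roughly a second between attempts until the adapter reports an address (here 172.19.62.237 after several empty responses). A sketch of that retry loop under assumed parameter names (getIP, interval, maxTries are illustrative):

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls getIP until it returns a non-empty address,
	// sleeping between attempts, mirroring the repeated
	// (Get-VM ...).networkadapters[0].ipaddresses[0] queries in the log.
	func waitForIP(getIP func() (string, error), interval time.Duration, maxTries int) (string, error) {
		for i := 0; i < maxTries; i++ {
			ip, err := getIP()
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
			time.Sleep(interval)
		}
		return "", errors.New("timed out waiting for VM IP")
	}

	func main() {
		tries := 0
		ip, _ := waitForIP(func() (string, error) {
			tries++
			if tries < 4 {
				return "", nil // adapter not ready yet, as in the first polls above
			}
			return "172.19.62.237", nil
		}, time.Millisecond, 10)
		fmt.Println(ip)
	}
	```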
	I0415 19:23:00.717245    2716 machine.go:94] provisionDockerMachine start ...
	I0415 19:23:00.717831    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:03.097790    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:03.097790    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:03.098497    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:05.862158    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:05.862158    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:05.873856    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:05.885689    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:05.885689    2716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:23:06.011608    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 19:23:06.011608    2716 buildroot.go:166] provisioning hostname "multinode-841000"
	I0415 19:23:06.011608    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:08.296656    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:08.296656    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:08.296751    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:10.992939    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:10.993096    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:10.999681    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:11.000892    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:11.000960    2716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841000 && echo "multinode-841000" | sudo tee /etc/hostname
	I0415 19:23:11.157927    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841000
	
	I0415 19:23:11.157927    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:13.476624    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:13.476624    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:13.476624    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:16.188133    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:16.188197    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:16.194137    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:16.194449    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:16.194449    2716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:23:16.333414    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 19:23:16.333414    2716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 19:23:16.333414    2716 buildroot.go:174] setting up certificates
	I0415 19:23:16.333414    2716 provision.go:84] configureAuth start
	I0415 19:23:16.333414    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:18.634486    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:18.634486    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:18.634486    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:21.373180    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:21.373180    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:21.373977    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:23.669852    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:23.669852    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:23.669852    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:26.429688    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:26.429688    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:26.430626    2716 provision.go:143] copyHostCerts
	I0415 19:23:26.430827    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 19:23:26.430880    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:23:26.430880    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 19:23:26.431617    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 19:23:26.432626    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 19:23:26.432626    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:23:26.432626    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 19:23:26.433375    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:23:26.434661    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 19:23:26.434661    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:23:26.435196    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 19:23:26.435482    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:23:26.436191    2716 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-841000 san=[127.0.0.1 172.19.62.237 localhost minikube multinode-841000]
	I0415 19:23:26.606364    2716 provision.go:177] copyRemoteCerts
	I0415 19:23:26.624566    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:23:26.624751    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:28.904941    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:28.904941    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:28.905164    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:31.617898    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:31.618859    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:31.619364    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:23:31.734873    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1102236s)
	I0415 19:23:31.734873    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 19:23:31.735397    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0415 19:23:31.782254    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 19:23:31.782254    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 19:23:31.833786    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 19:23:31.834213    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 19:23:31.886045    2716 provision.go:87] duration metric: took 15.5525044s to configureAuth
	I0415 19:23:31.886045    2716 buildroot.go:189] setting minikube options for container-runtime
	I0415 19:23:31.886045    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:23:31.886045    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:34.173666    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:34.173666    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:34.174196    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:36.901568    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:36.901568    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:36.908454    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:36.909009    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:36.909009    2716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:23:37.043553    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 19:23:37.043553    2716 buildroot.go:70] root file system type: tmpfs
	I0415 19:23:37.044795    2716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:23:37.044853    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:39.390858    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:39.390858    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:39.390858    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:42.117119    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:42.117119    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:42.123747    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:42.124412    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:42.124412    2716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:23:42.288192    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 19:23:42.288192    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:44.574143    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:44.574223    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:44.574223    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:47.288812    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:47.288901    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:47.296301    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:23:47.296301    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:23:47.296843    2716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:23:49.504243    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 19:23:49.504366    2716 machine.go:97] duration metric: took 48.7867254s to provisionDockerMachine
	I0415 19:23:49.504366    2716 client.go:171] duration metric: took 2m3.5134387s to LocalClient.Create
	I0415 19:23:49.504470    2716 start.go:167] duration metric: took 2m3.5135432s to libmachine.API.Create "multinode-841000"
	I0415 19:23:49.504470    2716 start.go:293] postStartSetup for "multinode-841000" (driver="hyperv")
	I0415 19:23:49.504470    2716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:23:49.520859    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:23:49.520859    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:51.801117    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:51.801117    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:51.801588    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:54.521952    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:54.522967    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:54.523203    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:23:54.623567    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1026675s)
	I0415 19:23:54.637343    2716 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 19:23:54.644876    2716 command_runner.go:130] > NAME=Buildroot
	I0415 19:23:54.644876    2716 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0415 19:23:54.644876    2716 command_runner.go:130] > ID=buildroot
	I0415 19:23:54.644876    2716 command_runner.go:130] > VERSION_ID=2023.02.9
	I0415 19:23:54.644876    2716 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0415 19:23:54.644876    2716 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 19:23:54.644876    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 19:23:54.644876    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 19:23:54.645628    2716 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 19:23:54.646220    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 19:23:54.661264    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 19:23:54.682170    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 19:23:54.734939    2716 start.go:296] duration metric: took 5.2304269s for postStartSetup
	I0415 19:23:54.737722    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:23:57.068109    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:23:57.068185    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:57.068185    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:23:59.799044    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:23:59.799946    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:23:59.800224    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:23:59.803102    2716 start.go:128] duration metric: took 2m13.817512s to createHost
	I0415 19:23:59.803276    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:02.096327    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:02.096327    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:02.096327    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:04.805308    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:04.806268    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:04.813051    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:24:04.813129    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:24:04.813129    2716 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 19:24:04.940901    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713209044.943326180
	
	I0415 19:24:04.940901    2716 fix.go:216] guest clock: 1713209044.943326180
	I0415 19:24:04.940901    2716 fix.go:229] Guest: 2024-04-15 19:24:04.94332618 +0000 UTC Remote: 2024-04-15 19:23:59.8032762 +0000 UTC m=+139.927639801 (delta=5.14004998s)
	I0415 19:24:04.940901    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:07.241084    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:07.241084    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:07.242015    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:09.986742    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:09.987361    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:09.995123    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:24:09.995273    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.237 22 <nil> <nil>}
	I0415 19:24:09.995273    2716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713209044
	I0415 19:24:10.135381    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 19:24:04 UTC 2024
	
	I0415 19:24:10.135381    2716 fix.go:236] clock set: Mon Apr 15 19:24:04 UTC 2024
	 (err=<nil>)
	I0415 19:24:10.135381    2716 start.go:83] releasing machines lock for "multinode-841000", held for 2m24.1501407s
	I0415 19:24:10.136902    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:12.459639    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:12.460633    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:12.460664    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:15.172259    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:15.173272    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:15.180412    2716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 19:24:15.180987    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:15.190138    2716 ssh_runner.go:195] Run: cat /version.json
	I0415 19:24:15.190138    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:17.537066    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:24:20.393652    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:20.393840    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:20.394376    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:24:20.415460    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:24:20.415460    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:24:20.416459    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:24:20.485351    2716 command_runner.go:130] > {"iso_version": "v1.33.0-1713175573-18634", "kicbase_version": "v0.0.43-1712854342-18621", "minikube_version": "v1.33.0-beta.0", "commit": "0ece0b4c602cbaab0821f0ba2d6ec4a07a392655"}
	I0415 19:24:20.486127    2716 ssh_runner.go:235] Completed: cat /version.json: (5.2959463s)
	I0415 19:24:20.502254    2716 ssh_runner.go:195] Run: systemctl --version
	I0415 19:24:20.616612    2716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0415 19:24:20.617582    2716 command_runner.go:130] > systemd 252 (252)
	I0415 19:24:20.617639    2716 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4371832s)
	I0415 19:24:20.617639    2716 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0415 19:24:20.632758    2716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 19:24:20.642157    2716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0415 19:24:20.642777    2716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 19:24:20.657119    2716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 19:24:20.695026    2716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0415 19:24:20.695026    2716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 19:24:20.695026    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:24:20.695434    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:24:20.737399    2716 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0415 19:24:20.753174    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 19:24:20.792906    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 19:24:20.818871    2716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 19:24:20.832725    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 19:24:20.872440    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:24:20.910360    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 19:24:20.947615    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:24:20.986546    2716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 19:24:21.028398    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 19:24:21.065788    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 19:24:21.102214    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 19:24:21.139167    2716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 19:24:21.159689    2716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0415 19:24:21.172969    2716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 19:24:21.220870    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:21.471026    2716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 19:24:21.507347    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:24:21.522730    2716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 19:24:21.548976    2716 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0415 19:24:21.549019    2716 command_runner.go:130] > [Unit]
	I0415 19:24:21.549237    2716 command_runner.go:130] > Description=Docker Application Container Engine
	I0415 19:24:21.549237    2716 command_runner.go:130] > Documentation=https://docs.docker.com
	I0415 19:24:21.549318    2716 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0415 19:24:21.549318    2716 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0415 19:24:21.549318    2716 command_runner.go:130] > StartLimitBurst=3
	I0415 19:24:21.549351    2716 command_runner.go:130] > StartLimitIntervalSec=60
	I0415 19:24:21.549351    2716 command_runner.go:130] > [Service]
	I0415 19:24:21.549351    2716 command_runner.go:130] > Type=notify
	I0415 19:24:21.549351    2716 command_runner.go:130] > Restart=on-failure
	I0415 19:24:21.549394    2716 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0415 19:24:21.549394    2716 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0415 19:24:21.549394    2716 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0415 19:24:21.549438    2716 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0415 19:24:21.549438    2716 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0415 19:24:21.549438    2716 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0415 19:24:21.549438    2716 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0415 19:24:21.549482    2716 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0415 19:24:21.549514    2716 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0415 19:24:21.549514    2716 command_runner.go:130] > ExecStart=
	I0415 19:24:21.549514    2716 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0415 19:24:21.549566    2716 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0415 19:24:21.549566    2716 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0415 19:24:21.549599    2716 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0415 19:24:21.549599    2716 command_runner.go:130] > LimitNOFILE=infinity
	I0415 19:24:21.549599    2716 command_runner.go:130] > LimitNPROC=infinity
	I0415 19:24:21.549599    2716 command_runner.go:130] > LimitCORE=infinity
	I0415 19:24:21.549599    2716 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0415 19:24:21.549650    2716 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0415 19:24:21.549650    2716 command_runner.go:130] > TasksMax=infinity
	I0415 19:24:21.549759    2716 command_runner.go:130] > TimeoutStartSec=0
	I0415 19:24:21.549759    2716 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0415 19:24:21.549789    2716 command_runner.go:130] > Delegate=yes
	I0415 19:24:21.549789    2716 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0415 19:24:21.549789    2716 command_runner.go:130] > KillMode=process
	I0415 19:24:21.549789    2716 command_runner.go:130] > [Install]
	I0415 19:24:21.549789    2716 command_runner.go:130] > WantedBy=multi-user.target
	I0415 19:24:21.565085    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:24:21.604369    2716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 19:24:21.664841    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:24:21.705768    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:24:21.749870    2716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 19:24:21.828417    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:24:21.856107    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:24:21.900812    2716 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0415 19:24:21.917441    2716 ssh_runner.go:195] Run: which cri-dockerd
	I0415 19:24:21.922526    2716 command_runner.go:130] > /usr/bin/cri-dockerd
	I0415 19:24:21.942272    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 19:24:21.963109    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 19:24:22.016256    2716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 19:24:22.264137    2716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 19:24:22.471570    2716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 19:24:22.471791    2716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 19:24:22.521250    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:22.753115    2716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:24:25.368075    2716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6149387s)
	I0415 19:24:25.383895    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 19:24:25.430295    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:24:25.470646    2716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 19:24:25.720400    2716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 19:24:25.955694    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:26.170379    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 19:24:26.222387    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:24:26.266389    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:26.484304    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 19:24:26.617534    2716 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 19:24:26.632318    2716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 19:24:26.645257    2716 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0415 19:24:26.645347    2716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0415 19:24:26.645386    2716 command_runner.go:130] > Device: 0,22	Inode: 883         Links: 1
	I0415 19:24:26.645386    2716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0415 19:24:26.645427    2716 command_runner.go:130] > Access: 2024-04-15 19:24:26.512175974 +0000
	I0415 19:24:26.645427    2716 command_runner.go:130] > Modify: 2024-04-15 19:24:26.512175974 +0000
	I0415 19:24:26.645427    2716 command_runner.go:130] > Change: 2024-04-15 19:24:26.518175974 +0000
	I0415 19:24:26.645486    2716 command_runner.go:130] >  Birth: -
	I0415 19:24:26.645516    2716 start.go:562] Will wait 60s for crictl version
	I0415 19:24:26.660095    2716 ssh_runner.go:195] Run: which crictl
	I0415 19:24:26.666687    2716 command_runner.go:130] > /usr/bin/crictl
	I0415 19:24:26.682347    2716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 19:24:26.740095    2716 command_runner.go:130] > Version:  0.1.0
	I0415 19:24:26.740252    2716 command_runner.go:130] > RuntimeName:  docker
	I0415 19:24:26.740252    2716 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0415 19:24:26.740252    2716 command_runner.go:130] > RuntimeApiVersion:  v1
	I0415 19:24:26.740312    2716 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 19:24:26.752687    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:24:26.784767    2716 command_runner.go:130] > 26.0.0
	I0415 19:24:26.795245    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:24:26.831249    2716 command_runner.go:130] > 26.0.0
	I0415 19:24:26.835318    2716 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 19:24:26.835318    2716 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 19:24:26.839257    2716 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 19:24:26.842299    2716 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 19:24:26.842299    2716 ip.go:210] interface addr: 172.19.48.1/20
	I0415 19:24:26.856293    2716 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 19:24:26.862967    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:24:26.886634    2716 kubeadm.go:877] updating cluster {Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 19:24:26.886634    2716 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:24:26.897311    2716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:24:26.918394    2716 docker.go:685] Got preloaded images: 
	I0415 19:24:26.918394    2716 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0415 19:24:26.934063    2716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 19:24:26.954847    2716 command_runner.go:139] > {"Repositories":{}}
	I0415 19:24:26.968826    2716 ssh_runner.go:195] Run: which lz4
	I0415 19:24:26.974832    2716 command_runner.go:130] > /usr/bin/lz4
	I0415 19:24:26.975257    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 19:24:26.990290    2716 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 19:24:26.996873    2716 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 19:24:26.997882    2716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 19:24:26.997918    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0415 19:24:28.862867    2716 docker.go:649] duration metric: took 1.8872924s to copy over tarball
	I0415 19:24:28.876645    2716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 19:24:38.097671    2716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2209517s)
	I0415 19:24:38.097807    2716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 19:24:38.169465    2716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 19:24:38.190723    2716 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0415 19:24:38.190723    2716 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0415 19:24:38.242447    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:38.473882    2716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:24:41.419870    2716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9458914s)
	I0415 19:24:41.431373    2716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:24:41.458820    2716 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0415 19:24:41.459910    2716 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0415 19:24:41.459946    2716 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0415 19:24:41.459983    2716 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0415 19:24:41.459983    2716 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:24:41.460057    2716 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 19:24:41.460057    2716 cache_images.go:84] Images are preloaded, skipping loading
	I0415 19:24:41.460128    2716 kubeadm.go:928] updating node { 172.19.62.237 8443 v1.29.3 docker true true} ...
	I0415 19:24:41.460249    2716 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-841000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.62.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 19:24:41.473102    2716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 19:24:41.510548    2716 command_runner.go:130] > cgroupfs
	I0415 19:24:41.511686    2716 cni.go:84] Creating CNI manager for ""
	I0415 19:24:41.511686    2716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 19:24:41.512255    2716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 19:24:41.512337    2716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.62.237 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-841000 NodeName:multinode-841000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.62.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.62.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 19:24:41.512422    2716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.62.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-841000"
	  kubeletExtraArgs:
	    node-ip: 172.19.62.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.62.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 19:24:41.528710    2716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 19:24:41.548458    2716 command_runner.go:130] > kubeadm
	I0415 19:24:41.548458    2716 command_runner.go:130] > kubectl
	I0415 19:24:41.548458    2716 command_runner.go:130] > kubelet
	I0415 19:24:41.549449    2716 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 19:24:41.563965    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 19:24:41.584446    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0415 19:24:41.620687    2716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 19:24:41.652366    2716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0415 19:24:41.698439    2716 ssh_runner.go:195] Run: grep 172.19.62.237	control-plane.minikube.internal$ /etc/hosts
	I0415 19:24:41.706003    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.62.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:24:41.741133    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:24:41.953381    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:24:41.984994    2716 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000 for IP: 172.19.62.237
	I0415 19:24:41.985165    2716 certs.go:194] generating shared ca certs ...
	I0415 19:24:41.985237    2716 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:41.985532    2716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 19:24:41.986378    2716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 19:24:41.986378    2716 certs.go:256] generating profile certs ...
	I0415 19:24:41.987364    2716 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.key
	I0415 19:24:41.987392    2716 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.crt with IP's: []
	I0415 19:24:42.229916    2716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.crt ...
	I0415 19:24:42.229916    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.crt: {Name:mk9badea2ff5b569dc09e71a8f795bea7c9e1356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.231015    2716 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.key ...
	I0415 19:24:42.231015    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\client.key: {Name:mke4cb8007f3a005256b61c64568ce8d40a62426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.233103    2716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593
	I0415 19:24:42.233103    2716 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.62.237]
	I0415 19:24:42.389686    2716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593 ...
	I0415 19:24:42.389686    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593: {Name:mk6e140699b78be59c9bc5f199ee895595487b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.390692    2716 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593 ...
	I0415 19:24:42.390692    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593: {Name:mk727d6acd2006bf70a4f4c8c4e152752ee2e9af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.391689    2716 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt.54490593 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt
	I0415 19:24:42.406617    2716 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key.54490593 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key
	I0415 19:24:42.407577    2716 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key
	I0415 19:24:42.408589    2716 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt with IP's: []
	I0415 19:24:42.537552    2716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt ...
	I0415 19:24:42.537552    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt: {Name:mkf3e1e5f690513401ff7fb344202eb4abdc6cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.538558    2716 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key ...
	I0415 19:24:42.538558    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key: {Name:mke8ee9fca7dffdeb19815e1840285da7eb6d959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:24:42.540522    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 19:24:42.540748    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 19:24:42.540949    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 19:24:42.541087    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 19:24:42.541350    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 19:24:42.541532    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 19:24:42.541686    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 19:24:42.550928    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 19:24:42.551791    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 19:24:42.552467    2716 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 19:24:42.552604    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 19:24:42.552772    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 19:24:42.553087    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 19:24:42.553343    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 19:24:42.553634    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 19:24:42.553634    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:42.554245    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 19:24:42.554415    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 19:24:42.554627    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 19:24:42.612957    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 19:24:42.669189    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 19:24:42.718283    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 19:24:42.769205    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 19:24:42.825934    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 19:24:42.878537    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 19:24:42.932734    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 19:24:42.983772    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 19:24:43.038842    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 19:24:43.093027    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 19:24:43.146498    2716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 19:24:43.196794    2716 ssh_runner.go:195] Run: openssl version
	I0415 19:24:43.208699    2716 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0415 19:24:43.223855    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 19:24:43.261989    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.271207    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.271407    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.286785    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 19:24:43.297352    2716 command_runner.go:130] > 51391683
	I0415 19:24:43.312704    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
	I0415 19:24:43.350586    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 19:24:43.386359    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.393930    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.394014    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.407956    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 19:24:43.417519    2716 command_runner.go:130] > 3ec20f2e
	I0415 19:24:43.432861    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 19:24:43.469609    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 19:24:43.504602    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.514294    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.514391    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.527936    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:24:43.538176    2716 command_runner.go:130] > b5213941
	I0415 19:24:43.552348    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 19:24:43.589245    2716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 19:24:43.596157    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:24:43.597158    2716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:24:43.597330    2716 kubeadm.go:391] StartCluster: {Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:24:43.608544    2716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 19:24:43.648717    2716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 19:24:43.666142    2716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0415 19:24:43.666142    2716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0415 19:24:43.666142    2716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0415 19:24:43.680266    2716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 19:24:43.712154    2716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0415 19:24:43.732157    2716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 19:24:43.732157    2716 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 19:24:43.732157    2716 kubeadm.go:156] found existing configuration files:
	
	I0415 19:24:43.746148    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 19:24:43.769233    2716 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 19:24:43.769293    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 19:24:43.785469    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 19:24:43.816137    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 19:24:43.840104    2716 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 19:24:43.841116    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 19:24:43.854109    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 19:24:43.883630    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 19:24:43.898633    2716 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 19:24:43.898633    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 19:24:43.910627    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 19:24:43.941576    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 19:24:43.962066    2716 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 19:24:43.962210    2716 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 19:24:43.978032    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 19:24:43.999043    2716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 19:24:44.480901    2716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 19:24:44.480901    2716 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 19:24:59.143055    2716 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 19:24:59.143171    2716 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0415 19:24:59.143385    2716 command_runner.go:130] > [preflight] Running pre-flight checks
	I0415 19:24:59.143417    2716 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 19:24:59.143613    2716 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 19:24:59.143613    2716 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 19:24:59.143613    2716 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 19:24:59.143613    2716 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 19:24:59.143613    2716 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 19:24:59.143613    2716 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 19:24:59.143613    2716 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 19:24:59.150477    2716 out.go:204]   - Generating certificates and keys ...
	I0415 19:24:59.143613    2716 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 19:24:59.150477    2716 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0415 19:24:59.150477    2716 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 19:24:59.150477    2716 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0415 19:24:59.150477    2716 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0415 19:24:59.151072    2716 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0415 19:24:59.151072    2716 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 19:24:59.151609    2716 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0415 19:24:59.151726    2716 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 19:24:59.151812    2716 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.151812    2716 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.151812    2716 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 19:24:59.151812    2716 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0415 19:24:59.152342    2716 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.152342    2716 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-841000] and IPs [172.19.62.237 127.0.0.1 ::1]
	I0415 19:24:59.152492    2716 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 19:24:59.152533    2716 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 19:24:59.152533    2716 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 19:24:59.152533    2716 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 19:24:59.152533    2716 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0415 19:24:59.152533    2716 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 19:24:59.152533    2716 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 19:24:59.152533    2716 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 19:24:59.153615    2716 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 19:24:59.153677    2716 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 19:24:59.153862    2716 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 19:24:59.153862    2716 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 19:24:59.153905    2716 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 19:24:59.154170    2716 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 19:24:59.154303    2716 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 19:24:59.158501    2716 out.go:204]   - Booting up control plane ...
	I0415 19:24:59.154368    2716 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 19:24:59.158829    2716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 19:24:59.158829    2716 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 19:24:59.158987    2716 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 19:24:59.158987    2716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 19:24:59.159153    2716 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 19:24:59.159153    2716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 19:24:59.159444    2716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 19:24:59.159444    2716 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 19:24:59.159751    2716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 19:24:59.159792    2716 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 19:24:59.159913    2716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0415 19:24:59.159913    2716 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 19:24:59.160369    2716 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 19:24:59.160369    2716 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 19:24:59.160493    2716 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.509640 seconds
	I0415 19:24:59.160543    2716 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.509640 seconds
	I0415 19:24:59.160689    2716 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 19:24:59.160689    2716 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 19:24:59.161035    2716 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 19:24:59.161035    2716 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 19:24:59.161220    2716 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0415 19:24:59.161270    2716 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 19:24:59.161710    2716 command_runner.go:130] > [mark-control-plane] Marking the node multinode-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 19:24:59.161710    2716 kubeadm.go:309] [mark-control-plane] Marking the node multinode-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 19:24:59.161837    2716 command_runner.go:130] > [bootstrap-token] Using token: j6rchv.u0yc33wyp2zsd69b
	I0415 19:24:59.161837    2716 kubeadm.go:309] [bootstrap-token] Using token: j6rchv.u0yc33wyp2zsd69b
	I0415 19:24:59.164964    2716 out.go:204]   - Configuring RBAC rules ...
	I0415 19:24:59.165049    2716 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 19:24:59.165049    2716 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 19:24:59.165049    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 19:24:59.165049    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 19:24:59.165643    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 19:24:59.165643    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 19:24:59.165963    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 19:24:59.165963    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 19:24:59.166210    2716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 19:24:59.166274    2716 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 19:24:59.166404    2716 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 19:24:59.166404    2716 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 19:24:59.166404    2716 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 19:24:59.166404    2716 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 19:24:59.166404    2716 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0415 19:24:59.166404    2716 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 19:24:59.167057    2716 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0415 19:24:59.167057    2716 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 19:24:59.167057    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0415 19:24:59.167310    2716 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0415 19:24:59.167310    2716 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0415 19:24:59.167310    2716 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 19:24:59.167310    2716 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 19:24:59.167310    2716 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 19:24:59.167310    2716 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 19:24:59.167310    2716 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 19:24:59.167310    2716 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.167310    2716 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 19:24:59.167310    2716 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 19:24:59.167310    2716 kubeadm.go:309] 
	I0415 19:24:59.168523    2716 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 19:24:59.168585    2716 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0415 19:24:59.168585    2716 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 19:24:59.168585    2716 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 19:24:59.168585    2716 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 19:24:59.168585    2716 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 19:24:59.168585    2716 kubeadm.go:309] 
	I0415 19:24:59.168585    2716 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0415 19:24:59.169165    2716 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 19:24:59.169465    2716 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 19:24:59.169465    2716 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0415 19:24:59.169632    2716 kubeadm.go:309] 
	I0415 19:24:59.169804    2716 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.169804    2716 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.170258    2716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 19:24:59.170258    2716 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 \
	I0415 19:24:59.170421    2716 command_runner.go:130] > 	--control-plane 
	I0415 19:24:59.170421    2716 kubeadm.go:309] 	--control-plane 
	I0415 19:24:59.170421    2716 kubeadm.go:309] 
	I0415 19:24:59.170649    2716 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0415 19:24:59.170721    2716 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 19:24:59.170721    2716 kubeadm.go:309] 
	I0415 19:24:59.170917    2716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.170971    2716 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j6rchv.u0yc33wyp2zsd69b \
	I0415 19:24:59.171192    2716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 19:24:59.171247    2716 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 19:24:59.171301    2716 cni.go:84] Creating CNI manager for ""
	I0415 19:24:59.171301    2716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 19:24:59.174631    2716 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 19:24:59.195921    2716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 19:24:59.215514    2716 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0415 19:24:59.215514    2716 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0415 19:24:59.215514    2716 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0415 19:24:59.215514    2716 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0415 19:24:59.215514    2716 command_runner.go:130] > Access: 2024-04-15 19:22:55.200417200 +0000
	I0415 19:24:59.215514    2716 command_runner.go:130] > Modify: 2024-04-15 15:49:28.000000000 +0000
	I0415 19:24:59.215514    2716 command_runner.go:130] > Change: 2024-04-15 19:22:45.121000000 +0000
	I0415 19:24:59.215514    2716 command_runner.go:130] >  Birth: -
	I0415 19:24:59.215514    2716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 19:24:59.215514    2716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 19:24:59.343662    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 19:24:59.991329    2716 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0415 19:24:59.991329    2716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0415 19:24:59.991329    2716 command_runner.go:130] > serviceaccount/kindnet created
	I0415 19:24:59.991435    2716 command_runner.go:130] > daemonset.apps/kindnet created
	I0415 19:24:59.991493    2716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 19:25:00.009041    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:00.009041    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-841000 minikube.k8s.io/updated_at=2024_04_15T19_24_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=multinode-841000 minikube.k8s.io/primary=true
	I0415 19:25:00.020479    2716 command_runner.go:130] > -16
	I0415 19:25:00.021009    2716 ops.go:34] apiserver oom_adj: -16
	I0415 19:25:00.248150    2716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0415 19:25:00.248330    2716 command_runner.go:130] > node/multinode-841000 labeled
	I0415 19:25:00.262597    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:00.429221    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:00.765484    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:00.889898    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:01.267175    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:01.378983    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:01.772401    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:01.892267    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:02.270240    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:02.393761    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:02.775868    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:02.893256    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:03.265864    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:03.386288    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:03.762648    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:03.900208    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:04.267634    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:04.391779    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:04.771539    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:04.890716    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:05.274414    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:05.387287    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:05.774652    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:05.911595    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:06.277552    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:06.394174    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:06.778143    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:06.899343    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:07.266888    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:07.382375    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:07.768664    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:07.884965    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:08.271818    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:08.384778    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:08.778983    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:08.897051    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:09.277417    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:09.426507    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:09.764842    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:09.885394    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:10.273745    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:10.408407    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:10.761869    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:10.912098    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:11.274697    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:11.394198    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:11.762370    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:11.949404    2716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0415 19:25:12.271315    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 19:25:12.430297    2716 command_runner.go:130] > NAME      SECRETS   AGE
	I0415 19:25:12.431310    2716 command_runner.go:130] > default   0         0s
	I0415 19:25:12.431391    2716 kubeadm.go:1107] duration metric: took 12.4397971s to wait for elevateKubeSystemPrivileges
	W0415 19:25:12.431391    2716 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 19:25:12.431391    2716 kubeadm.go:393] duration metric: took 28.8338277s to StartCluster
	I0415 19:25:12.431391    2716 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:25:12.431753    2716 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:12.432993    2716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:25:12.434702    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 19:25:12.434804    2716 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 19:25:12.435141    2716 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 19:25:12.437465    2716 addons.go:69] Setting storage-provisioner=true in profile "multinode-841000"
	I0415 19:25:12.435625    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:25:12.437465    2716 out.go:177] * Verifying Kubernetes components...
	I0415 19:25:12.437594    2716 addons.go:234] Setting addon storage-provisioner=true in "multinode-841000"
	I0415 19:25:12.437624    2716 addons.go:69] Setting default-storageclass=true in profile "multinode-841000"
	I0415 19:25:12.441073    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:25:12.441073    2716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-841000"
	I0415 19:25:12.442465    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:12.442731    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:12.457069    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:25:12.720193    2716 command_runner.go:130] > apiVersion: v1
	I0415 19:25:12.721198    2716 command_runner.go:130] > data:
	I0415 19:25:12.721198    2716 command_runner.go:130] >   Corefile: |
	I0415 19:25:12.721198    2716 command_runner.go:130] >     .:53 {
	I0415 19:25:12.721198    2716 command_runner.go:130] >         errors
	I0415 19:25:12.721198    2716 command_runner.go:130] >         health {
	I0415 19:25:12.721198    2716 command_runner.go:130] >            lameduck 5s
	I0415 19:25:12.721198    2716 command_runner.go:130] >         }
	I0415 19:25:12.721198    2716 command_runner.go:130] >         ready
	I0415 19:25:12.721198    2716 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0415 19:25:12.721198    2716 command_runner.go:130] >            pods insecure
	I0415 19:25:12.721198    2716 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0415 19:25:12.721198    2716 command_runner.go:130] >            ttl 30
	I0415 19:25:12.721198    2716 command_runner.go:130] >         }
	I0415 19:25:12.721198    2716 command_runner.go:130] >         prometheus :9153
	I0415 19:25:12.721198    2716 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0415 19:25:12.721198    2716 command_runner.go:130] >            max_concurrent 1000
	I0415 19:25:12.721198    2716 command_runner.go:130] >         }
	I0415 19:25:12.721198    2716 command_runner.go:130] >         cache 30
	I0415 19:25:12.721198    2716 command_runner.go:130] >         loop
	I0415 19:25:12.721198    2716 command_runner.go:130] >         reload
	I0415 19:25:12.721198    2716 command_runner.go:130] >         loadbalance
	I0415 19:25:12.721198    2716 command_runner.go:130] >     }
	I0415 19:25:12.721198    2716 command_runner.go:130] > kind: ConfigMap
	I0415 19:25:12.721198    2716 command_runner.go:130] > metadata:
	I0415 19:25:12.721198    2716 command_runner.go:130] >   creationTimestamp: "2024-04-15T19:24:58Z"
	I0415 19:25:12.721198    2716 command_runner.go:130] >   name: coredns
	I0415 19:25:12.721198    2716 command_runner.go:130] >   namespace: kube-system
	I0415 19:25:12.721198    2716 command_runner.go:130] >   resourceVersion: "271"
	I0415 19:25:12.721198    2716 command_runner.go:130] >   uid: 8d1ff511-93dc-4477-8bf3-bdcc02b55248
	I0415 19:25:12.723200    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 19:25:12.820058    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:25:13.162482    2716 command_runner.go:130] > configmap/coredns replaced
	I0415 19:25:13.162482    2716 start.go:946] {"host.minikube.internal": 172.19.48.1} host record injected into CoreDNS's ConfigMap
	I0415 19:25:13.163882    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:13.163969    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:13.164710    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:25:13.164710    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:25:13.166414    2716 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 19:25:13.166662    2716 node_ready.go:35] waiting up to 6m0s for node "multinode-841000" to be "Ready" ...
	I0415 19:25:13.166662    2716 round_trippers.go:463] GET https://172.19.62.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0415 19:25:13.166662    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.166662    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.166662    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.166662    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:13.166662    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.166662    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.166662    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.184653    2716 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0415 19:25:13.185310    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.185310    2716 round_trippers.go:580]     Audit-Id: bb6b9523-4d8a-4957-ae71-f1e090ac09c3
	I0415 19:25:13.185425    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.185425    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.185467    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.185541    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.185310    2716 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0415 19:25:13.185541    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.185741    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.185844    2716 round_trippers.go:580]     Audit-Id: 2ec9d5fc-76f8-40ba-b04a-d698081275a9
	I0415 19:25:13.185902    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.185902    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.185942    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.185942    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.185942    2716 round_trippers.go:580]     Content-Length: 291
	I0415 19:25:13.185942    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.185942    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:13.185942    2716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"380","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.186903    2716 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"380","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.186998    2716 round_trippers.go:463] PUT https://172.19.62.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0415 19:25:13.187139    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.187168    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.187168    2716 round_trippers.go:473]     Content-Type: application/json
	I0415 19:25:13.187168    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.213841    2716 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0415 19:25:13.213909    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.213909    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.213978    2716 round_trippers.go:580]     Audit-Id: f6b375a1-2157-490f-a913-b3265531fe86
	I0415 19:25:13.213978    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.213978    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.213978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.213978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.214042    2716 round_trippers.go:580]     Content-Length: 291
	I0415 19:25:13.214042    2716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"395","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.676776    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:13.676944    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.677025    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.677025    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.677025    2716 round_trippers.go:463] GET https://172.19.62.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0415 19:25:13.677025    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:13.677025    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:13.677025    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:13.682592    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:13.682592    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.682592    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.682592    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Audit-Id: 6cc27bdc-c711-4449-b866-bf5b9fafd3d0
	I0415 19:25:13.682592    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.683343    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:13.685590    2716 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 19:25:13.685590    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:13 GMT
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Audit-Id: 0b0f90a5-166f-495e-bc27-91cf5df4e81e
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:13.685590    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:13.685590    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:13.685590    2716 round_trippers.go:580]     Content-Length: 291
	I0415 19:25:13.685590    2716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4df20018-f3d8-466e-bf64-841fb958db45","resourceVersion":"406","creationTimestamp":"2024-04-15T19:24:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0415 19:25:13.685590    2716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-841000" context rescaled to 1 replicas
	I0415 19:25:14.168541    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:14.168541    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:14.168541    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:14.168541    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:14.172541    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:14.173026    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:14.173084    2716 round_trippers.go:580]     Audit-Id: f87ad259-4fcc-4e84-9573-ccfa435000f3
	I0415 19:25:14.173084    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:14.173137    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:14.173137    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:14.173137    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:14.173137    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:14 GMT
	I0415 19:25:14.173137    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:14.674384    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:14.674384    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:14.674529    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:14.674529    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:14.677762    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:14.678495    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:14 GMT
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Audit-Id: f19a0b63-15f0-446f-87a6-aa46e4e2ab0a
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:14.678495    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:14.678495    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:14.678495    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:14.678782    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:14.874203    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:14.874203    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:14.876058    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:25:14.876434    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:25:14.877232    2716 addons.go:234] Setting addon default-storageclass=true in "multinode-841000"
	I0415 19:25:14.878077    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:25:14.879209    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:14.883387    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:14.883387    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:14.886293    2716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:25:14.889482    2716 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:25:14.889482    2716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 19:25:14.889482    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:15.180652    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:15.180652    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:15.180652    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:15.180652    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:15.186711    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:25:15.187243    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:15.187243    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:15.187243    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:15 GMT
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Audit-Id: 56c251e5-ac69-44d3-bc08-8fec6a99784f
	I0415 19:25:15.187243    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:15.187860    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:15.188761    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:15.672694    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:15.672822    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:15.672822    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:15.672822    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:15.687150    2716 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0415 19:25:15.687150    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Audit-Id: 064c75b2-c449-479c-bc0f-bf148099d815
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:15.687242    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:15.687242    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:15.687242    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:15 GMT
	I0415 19:25:15.687313    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:16.182019    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:16.182078    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:16.182186    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:16.182186    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:16.185527    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:16.186356    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:16.186356    2716 round_trippers.go:580]     Audit-Id: dcaf4b48-25a5-4d75-b7e4-51a61a6cd5fb
	I0415 19:25:16.186356    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:16.186422    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:16.186422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:16.186422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:16.186422    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:16 GMT
	I0415 19:25:16.186422    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:16.674058    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:16.674119    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:16.674119    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:16.674119    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:16.677551    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:16.678518    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:16.678518    2716 round_trippers.go:580]     Audit-Id: edbd2427-b881-4d80-8f4e-f7dfbba489dd
	I0415 19:25:16.678518    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:16.678518    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:16.678570    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:16.678570    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:16.678570    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:16 GMT
	I0415 19:25:16.678917    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:17.166733    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:17.166733    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:17.166733    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:17.166733    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:17.171392    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:17.171454    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:17.171454    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:17.171454    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:17 GMT
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Audit-Id: 7be2eb4f-947d-45ea-90e3-05a60ca446bc
	I0415 19:25:17.171454    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:17.172026    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:17.368048    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:17.368048    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:17.368918    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:25:17.490902    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:17.490902    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:17.491910    2716 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 19:25:17.491910    2716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 19:25:17.491910    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:25:17.673369    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:17.673441    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:17.673441    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:17.673441    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:17.678006    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:17.678006    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:17.678006    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:17 GMT
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Audit-Id: 766366c7-dddc-4a76-9d73-ce49b7182b44
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:17.678091    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:17.678091    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:17.678487    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:17.679126    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:18.182494    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:18.182902    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:18.182970    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:18.182970    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:18.188090    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:18.188090    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:18.188090    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:18.188347    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:18.188347    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:18.188347    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:18.188347    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:18 GMT
	I0415 19:25:18.188347    2716 round_trippers.go:580]     Audit-Id: e79db893-4e5c-4ee0-8f7c-f91ff1256163
	I0415 19:25:18.188686    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:18.676365    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:18.676365    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:18.676479    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:18.676479    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:18.680782    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:18.680782    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:18.680782    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:18.680782    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:18 GMT
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Audit-Id: 173d487f-e842-460b-8b59-eeec3bb328d3
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:18.680782    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:18.681603    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:19.168847    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:19.168997    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:19.168997    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:19.168997    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:19.510246    2716 round_trippers.go:574] Response Status: 200 OK in 340 milliseconds
	I0415 19:25:19.510246    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:19.510246    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:19.510246    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:19 GMT
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Audit-Id: a6fd8c22-d458-4d90-a9c2-6b2048fd4e38
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:19.510338    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:19.510614    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:19.675169    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:19.675169    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:19.675169    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:19.675169    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:19.686796    2716 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0415 19:25:19.687109    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:19.687109    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:19.687109    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:19.687109    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:19 GMT
	I0415 19:25:19.687109    2716 round_trippers.go:580]     Audit-Id: 55a37516-25f9-4537-9307-441fa2b596ab
	I0415 19:25:19.687176    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:19.687176    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:19.688539    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:19.689314    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:19.875679    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:25:19.875679    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:19.876640    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:25:20.167206    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:20.167206    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:20.167206    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:20.167206    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:20.172311    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:20.172311    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:20.172456    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:20.172456    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:20 GMT
	I0415 19:25:20.172456    2716 round_trippers.go:580]     Audit-Id: 054b8ae2-9991-4db7-a332-5112cb975549
	I0415 19:25:20.172767    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:20.214070    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:25:20.214401    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:20.214591    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:25:20.362464    2716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:25:20.673468    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:20.673468    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:20.673468    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:20.673468    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:20.677209    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:20.677209    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Audit-Id: cf447d99-e5a8-4a4f-bd48-fea652a8a62e
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:20.677209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:20.677209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:20.677209    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:20 GMT
	I0415 19:25:20.677209    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:21.036390    2716 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0415 19:25:21.036390    2716 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0415 19:25:21.036390    2716 command_runner.go:130] > pod/storage-provisioner created
	I0415 19:25:21.178785    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:21.178785    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:21.178785    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:21.178785    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:21.183352    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:21.183352    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:21.183352    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:21.183352    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:21 GMT
	I0415 19:25:21.183352    2716 round_trippers.go:580]     Audit-Id: 35c3a732-5b63-4b42-8e84-2ced1a30fea9
	I0415 19:25:21.184358    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:21.184358    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:21.184358    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:21.184507    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:21.669812    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:21.669812    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:21.669812    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:21.669812    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:21.674922    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:21.675198    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:21.675198    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:21.675198    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:21 GMT
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Audit-Id: 398eb0fc-7c8d-48e5-8f24-3c88f3a1b09e
	I0415 19:25:21.675198    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:21.675573    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:22.176683    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:22.176794    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.176794    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.176794    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.181422    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:22.181422    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Audit-Id: 1ed97db8-c500-44dc-ab1a-e9ee18ff1e26
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.181422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.181422    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.181422    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.181804    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:22.182335    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:22.668875    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:22.668875    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.668875    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.668875    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.672480    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:22.672862    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Audit-Id: e56d5f28-d578-45d1-944c-d994261863f7
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.672862    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.672862    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.672862    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.673236    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:22.675816    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:25:22.675816    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:22.676514    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:25:22.815934    2716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 19:25:22.983976    2716 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0415 19:25:22.985170    2716 round_trippers.go:463] GET https://172.19.62.237:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 19:25:22.985170    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.985170    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.985170    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.988567    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:22.989277    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.989277    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Content-Length: 1273
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Audit-Id: 1027dbcc-df4b-471b-bb6f-f54038aaba64
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.989340    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.989402    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.989402    2716 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"standard","uid":"522fae16-007d-46c3-bc39-f9b62496ebdd","resourceVersion":"435","creationTimestamp":"2024-04-15T19:25:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T19:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0415 19:25:22.990347    2716 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"522fae16-007d-46c3-bc39-f9b62496ebdd","resourceVersion":"435","creationTimestamp":"2024-04-15T19:25:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T19:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 19:25:22.990437    2716 round_trippers.go:463] PUT https://172.19.62.237:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 19:25:22.990437    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:22.990437    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:22.990437    2716 round_trippers.go:473]     Content-Type: application/json
	I0415 19:25:22.990437    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:22.994755    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:22.994755    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:22.994755    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:22.994755    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Content-Length: 1220
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:22 GMT
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Audit-Id: a7c3d187-311a-4ac4-b672-dee1f28d959e
	I0415 19:25:22.994755    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:22.994755    2716 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"522fae16-007d-46c3-bc39-f9b62496ebdd","resourceVersion":"435","creationTimestamp":"2024-04-15T19:25:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T19:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 19:25:22.998220    2716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 19:25:23.001505    2716 addons.go:505] duration metric: took 10.5662789s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 19:25:23.173027    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:23.173027    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:23.173027    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:23.173027    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:23.176385    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:23.176385    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:23.176385    2716 round_trippers.go:580]     Audit-Id: 7c0cd735-491e-439e-8745-871492a2f428
	I0415 19:25:23.176385    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:23.177293    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:23.177293    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:23.177293    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:23.177293    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:23 GMT
	I0415 19:25:23.177629    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:23.675226    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:23.675226    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:23.675226    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:23.675226    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:23.679838    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:23.679838    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Audit-Id: 7c238fa8-e693-4d45-8b35-c6f036bb11d0
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:23.679838    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:23.679838    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:23.679838    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:23 GMT
	I0415 19:25:23.680395    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:24.174936    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:24.174936    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:24.174936    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:24.174936    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:24.178355    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:24.178355    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:24.178355    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:24.178355    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:24.178355    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:24 GMT
	I0415 19:25:24.178355    2716 round_trippers.go:580]     Audit-Id: 84c024ca-b1b9-4233-9284-14a056994490
	I0415 19:25:24.179369    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:24.179369    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:24.179468    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:24.674426    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:24.674489    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:24.674539    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:24.674539    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:24.680860    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:25:24.680860    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:24.680860    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:24.680860    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:24 GMT
	I0415 19:25:24.680860    2716 round_trippers.go:580]     Audit-Id: 3cb256e8-706c-40f4-b254-a31c63b8dd98
	I0415 19:25:24.681524    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:24.681613    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:25.174606    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:25.174606    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:25.174606    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:25.174724    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:25.179163    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:25.179163    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:25.179163    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:25.179163    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:25 GMT
	I0415 19:25:25.179163    2716 round_trippers.go:580]     Audit-Id: 773dab00-294d-49f0-8aba-ecdbe5221693
	I0415 19:25:25.180020    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:25.673992    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:25.674114    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:25.674114    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:25.674114    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:25.681425    2716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 19:25:25.681425    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:25.681425    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:25.681425    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:25 GMT
	I0415 19:25:25.681425    2716 round_trippers.go:580]     Audit-Id: 8dddb1ba-9c41-460b-a970-b1b8edc52163
	I0415 19:25:25.681978    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:26.175179    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:26.175290    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:26.175290    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:26.175290    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:26.180548    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:26.180715    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:26.180760    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:26.180760    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:26.180821    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:26 GMT
	I0415 19:25:26.180885    2716 round_trippers.go:580]     Audit-Id: 28ac2693-229d-46a1-97d2-f6ad22178a7a
	I0415 19:25:26.180910    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:26.180910    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:26.181189    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:26.674402    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:26.674402    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:26.674402    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:26.674402    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:26.681589    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:25:26.681633    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:26.681633    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:26 GMT
	I0415 19:25:26.681633    2716 round_trippers.go:580]     Audit-Id: d6dfdc52-f909-49ad-a92d-1687a20beb38
	I0415 19:25:26.681633    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:26.681701    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:26.681701    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:26.681701    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:26.681971    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"367","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4935 chars]
	I0415 19:25:26.682512    2716 node_ready.go:53] node "multinode-841000" has status "Ready":"False"
	I0415 19:25:27.179466    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:27.179466    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.179466    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.179466    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.185213    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:27.185818    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.185818    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.185818    2716 round_trippers.go:580]     Audit-Id: 7f1378cf-dd16-4dc3-825d-57c590da8e1f
	I0415 19:25:27.185818    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.185886    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.185886    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.185886    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.186028    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:27.186734    2716 node_ready.go:49] node "multinode-841000" has status "Ready":"True"
	I0415 19:25:27.186762    2716 node_ready.go:38] duration metric: took 14.0199861s for node "multinode-841000" to be "Ready" ...
	I0415 19:25:27.186762    2716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:25:27.186762    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:27.186762    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.186762    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.186762    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.198256    2716 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0415 19:25:27.198256    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.198256    2716 round_trippers.go:580]     Audit-Id: fa7eaaad-892e-49e6-b002-1dd49bffdb44
	I0415 19:25:27.198256    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.198256    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.198256    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.199249    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.199249    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.200171    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"443","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56336 chars]
	I0415 19:25:27.206176    2716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:27.206176    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:27.206176    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.206176    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.206176    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.211169    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:27.211169    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.211251    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.211251    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Audit-Id: fef8e4f7-0d1c-4e7b-91ca-7689225a0965
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.211251    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.211445    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"443","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0415 19:25:27.212175    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:27.212175    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.212175    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.212175    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.215583    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:27.216131    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Audit-Id: 9a558301-9bb6-47c1-8c02-f972abeb6bb7
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.216131    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.216131    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.216131    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.216611    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:27.720607    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:27.720607    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.720607    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.720607    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.724209    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:27.724209    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.724209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Audit-Id: 5caf761a-4b20-4cae-a0f1-c5d8ce528a58
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.724209    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.724209    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.725453    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"443","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0415 19:25:27.726505    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:27.726505    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:27.726505    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:27.726505    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:27.730085    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:27.730085    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Audit-Id: 856e4f75-5138-4018-a158-bdcc9a9f1fc1
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:27.730085    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:27.730085    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:27.730085    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:27 GMT
	I0415 19:25:27.731111    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:28.210989    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:28.210989    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.210989    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.210989    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.215609    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:28.215609    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.215609    2716 round_trippers.go:580]     Audit-Id: 3769fd09-bbb9-49af-8504-af4ec44b2089
	I0415 19:25:28.215609    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.215609    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.215609    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.215609    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.215908    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.216620    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"454","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6807 chars]
	I0415 19:25:28.217935    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:28.217935    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.218011    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.218011    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.220205    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:28.220205    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.220205    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.220205    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.220205    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.221274    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.221274    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.221298    2716 round_trippers.go:580]     Audit-Id: 63cd24d9-2196-4df5-ad2a-9e45561631f3
	I0415 19:25:28.222508    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:28.709061    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:28.709061    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.709235    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.709235    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.713959    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:25:28.714472    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.714472    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Audit-Id: 8abccf35-d146-4b42-9be7-d3966cf6292f
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.714472    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.714599    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.714816    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"454","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6807 chars]
	I0415 19:25:28.715626    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:28.715626    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:28.715719    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:28.715719    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:28.717995    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:28.719009    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Audit-Id: 03720550-17d7-49b4-809e-5f1d8b43483a
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:28.719070    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:28.719070    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:28.719070    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:28 GMT
	I0415 19:25:28.719369    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.210757    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:25:29.210757    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.210757    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.210757    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.215816    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:29.215816    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.215816    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.215816    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Audit-Id: a83047c7-123d-4541-ae75-138589f8941e
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.215816    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.215816    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0415 19:25:29.216985    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.216985    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.216985    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.216985    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.220580    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.220580    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.220580    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.220580    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Audit-Id: ebc96133-96f3-47a0-8176-757cba31fe63
	I0415 19:25:29.220580    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.221389    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.221955    2716 pod_ready.go:92] pod "coredns-76f75df574-vqqtx" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.221955    2716 pod_ready.go:81] duration metric: took 2.015763s for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.221955    2716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.221955    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-841000
	I0415 19:25:29.221955    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.221955    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.221955    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.224783    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:29.224783    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.225773    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.225773    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.225805    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.225805    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.225805    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.225805    2716 round_trippers.go:580]     Audit-Id: 877305b6-4f03-4500-899a-ec1ce64b2a0a
	I0415 19:25:29.226204    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-841000","namespace":"kube-system","uid":"ec0b243b-fd9f-4081-82dc-532086096935","resourceVersion":"420","creationTimestamp":"2024-04-15T19:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.237:2379","kubernetes.io/config.hash":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.mirror":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.seen":"2024-04-15T19:24:49.499002669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0415 19:25:29.226859    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.226933    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.226987    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.226987    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.228723    2716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0415 19:25:29.228723    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.228723    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.228723    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.228723    2716 round_trippers.go:580]     Audit-Id: 1a966896-95d1-4476-9966-f1761bd36cd5
	I0415 19:25:29.230050    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.230108    2716 pod_ready.go:92] pod "etcd-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.230108    2716 pod_ready.go:81] duration metric: took 8.1526ms for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.230108    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.230108    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-841000
	I0415 19:25:29.230108    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.230108    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.230643    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.233100    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:29.233100    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.233100    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.233100    2716 round_trippers.go:580]     Audit-Id: 588ba998-aa85-4d45-9ad8-3e997534c7d9
	I0415 19:25:29.233100    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.233498    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.233498    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.233498    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.233793    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-841000","namespace":"kube-system","uid":"092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b","resourceVersion":"419","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.237:8443","kubernetes.io/config.hash":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.mirror":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.seen":"2024-04-15T19:24:59.013465769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0415 19:25:29.234236    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.234236    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.234236    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.234236    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.239265    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:29.239404    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.239404    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.239404    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Audit-Id: 4008e473-e369-46c2-987d-535707016b4f
	I0415 19:25:29.239404    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.239656    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.240493    2716 pod_ready.go:92] pod "kube-apiserver-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.240493    2716 pod_ready.go:81] duration metric: took 10.3852ms for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.240493    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.240493    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-841000
	I0415 19:25:29.240493    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.240493    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.240493    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.244136    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.244294    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.244294    2716 round_trippers.go:580]     Audit-Id: da3730a9-8a2e-4990-bf42-f03d354d6f3f
	I0415 19:25:29.244294    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.244357    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.244357    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.244357    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.244357    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.245148    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-841000","namespace":"kube-system","uid":"8922765c-684e-491a-83a0-e06cec665bbd","resourceVersion":"417","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.mirror":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.seen":"2024-04-15T19:24:59.013467070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0415 19:25:29.245148    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.245148    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.245148    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.245148    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.248301    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.248301    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.248301    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Audit-Id: 6d6be13f-1c81-44ed-a5a8-0aea1b6a2020
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.248301    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.248301    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.248301    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.248301    2716 pod_ready.go:92] pod "kube-controller-manager-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.248301    2716 pod_ready.go:81] duration metric: took 7.8084ms for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.248301    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.248301    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7v79z
	I0415 19:25:29.248301    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.248301    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.248301    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.251774    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.251774    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Audit-Id: 89894b9e-8b20-488c-9b09-015d5270a899
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.251774    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.251774    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.251774    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.251774    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7v79z","generateName":"kube-proxy-","namespace":"kube-system","uid":"0a08abf8-9fa3-4fab-86cc-1b709bc0d263","resourceVersion":"414","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c07d15d0-ec90-403c-8aa0-1c81c17e9eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07d15d0-ec90-403c-8aa0-1c81c17e9eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0415 19:25:29.253589    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.253655    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.253655    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.253655    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.256237    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:25:29.256727    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.256727    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.256727    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.256727    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.256727    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.256727    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.256844    2716 round_trippers.go:580]     Audit-Id: cdffaddd-d3d0-4aac-b6b2-4192ca31bf0d
	I0415 19:25:29.257183    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.257584    2716 pod_ready.go:92] pod "kube-proxy-7v79z" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.257648    2716 pod_ready.go:81] duration metric: took 9.347ms for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.257704    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.414353    2716 request.go:629] Waited for 156.5895ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:25:29.414353    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:25:29.414353    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.414353    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.414353    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.417953    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.418983    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.418983    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.418983    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.419016    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.419016    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.419016    2716 round_trippers.go:580]     Audit-Id: 5589bc83-da3e-4372-b8c8-e5dd13256b78
	I0415 19:25:29.419016    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.419215    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-841000","namespace":"kube-system","uid":"67374ab1-2ea0-4b43-82b8-1b666d274f2f","resourceVersion":"418","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.mirror":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.seen":"2024-04-15T19:24:59.013468170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0415 19:25:29.619907    2716 request.go:629] Waited for 199.7705ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.619907    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:25:29.619907    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.619907    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.619907    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.623511    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:29.623511    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.623511    2716 round_trippers.go:580]     Audit-Id: fcd480a9-bd58-4a01-8cdd-a19aeda905ac
	I0415 19:25:29.623511    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.623511    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.623511    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.623511    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.623944    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.624800    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"439","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4790 chars]
	I0415 19:25:29.625321    2716 pod_ready.go:92] pod "kube-scheduler-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:25:29.625409    2716 pod_ready.go:81] duration metric: took 367.6145ms for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:25:29.625409    2716 pod_ready.go:38] duration metric: took 2.4386278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:25:29.625498    2716 api_server.go:52] waiting for apiserver process to appear ...
	I0415 19:25:29.640280    2716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:25:29.668788    2716 command_runner.go:130] > 2019
	I0415 19:25:29.669807    2716 api_server.go:72] duration metric: took 17.2348031s to wait for apiserver process to appear ...
	I0415 19:25:29.669890    2716 api_server.go:88] waiting for apiserver healthz status ...
	I0415 19:25:29.669962    2716 api_server.go:253] Checking apiserver healthz at https://172.19.62.237:8443/healthz ...
	I0415 19:25:29.676471    2716 api_server.go:279] https://172.19.62.237:8443/healthz returned 200:
	ok
	I0415 19:25:29.677272    2716 round_trippers.go:463] GET https://172.19.62.237:8443/version
	I0415 19:25:29.677272    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.677272    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.677272    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.678840    2716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0415 19:25:29.679442    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.679442    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.679442    2716 round_trippers.go:580]     Content-Length: 263
	I0415 19:25:29.679508    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.679508    2716 round_trippers.go:580]     Audit-Id: 8fb8f1c3-a19b-4ec4-84b8-6c2e25aaf9ed
	I0415 19:25:29.679583    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.679583    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.679583    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.679583    2716 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0415 19:25:29.679715    2716 api_server.go:141] control plane version: v1.29.3
	I0415 19:25:29.679715    2716 api_server.go:131] duration metric: took 9.8248ms to wait for apiserver health ...
	I0415 19:25:29.679715    2716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 19:25:29.821517    2716 request.go:629] Waited for 141.5559ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:29.821517    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:29.821517    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:29.821517    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:29.821517    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:29.827135    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:25:29.827135    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:29.827135    2716 round_trippers.go:580]     Audit-Id: af0f0293-ed9b-42ec-9630-d0cc0ac3eb59
	I0415 19:25:29.827639    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:29.827639    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:29.827639    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:29.827639    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:29.827639    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:29 GMT
	I0415 19:25:29.831888    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"464"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0415 19:25:29.835249    2716 system_pods.go:59] 8 kube-system pods found
	I0415 19:25:29.835249    2716 system_pods.go:61] "coredns-76f75df574-vqqtx" [5cce6545-fec3-4334-9041-de82b0e42801] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "etcd-multinode-841000" [ec0b243b-fd9f-4081-82dc-532086096935] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kindnet-zrzd6" [53c9b26b-4969-46c3-ba6e-f831423010a8] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-apiserver-multinode-841000" [092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-controller-manager-multinode-841000" [8922765c-684e-491a-83a0-e06cec665bbd] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-proxy-7v79z" [0a08abf8-9fa3-4fab-86cc-1b709bc0d263] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "kube-scheduler-multinode-841000" [67374ab1-2ea0-4b43-82b8-1b666d274f2f] Running
	I0415 19:25:29.835249    2716 system_pods.go:61] "storage-provisioner" [d93f9b0a-834d-4028-ae0d-5e1287ef5b9e] Running
	I0415 19:25:29.835249    2716 system_pods.go:74] duration metric: took 155.5324ms to wait for pod list to return data ...
	I0415 19:25:29.835249    2716 default_sa.go:34] waiting for default service account to be created ...
	I0415 19:25:30.023615    2716 request.go:629] Waited for 188.1719ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/default/serviceaccounts
	I0415 19:25:30.023615    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/default/serviceaccounts
	I0415 19:25:30.023615    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:30.023615    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:30.023862    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:30.027264    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:30.028024    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:30 GMT
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Audit-Id: ec077c93-db7e-40cc-8490-ed09389a771b
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:30.028024    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:30.028024    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:30.028024    2716 round_trippers.go:580]     Content-Length: 261
	I0415 19:25:30.028024    2716 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d2cffbc1-13e4-4afc-b8e1-a84c6688a045","resourceVersion":"336","creationTimestamp":"2024-04-15T19:25:12Z"}}]}
	I0415 19:25:30.028024    2716 default_sa.go:45] found service account: "default"
	I0415 19:25:30.028024    2716 default_sa.go:55] duration metric: took 192.7738ms for default service account to be created ...
	I0415 19:25:30.028024    2716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 19:25:30.224638    2716 request.go:629] Waited for 195.8866ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:30.224638    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:25:30.224638    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:30.224638    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:30.224638    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:30.241503    2716 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0415 19:25:30.241503    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:30.241503    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:30.241503    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:30 GMT
	I0415 19:25:30.241503    2716 round_trippers.go:580]     Audit-Id: 475cb4a0-1a82-4504-b709-17a6153ac252
	I0415 19:25:30.243203    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"467"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0415 19:25:30.246262    2716 system_pods.go:86] 8 kube-system pods found
	I0415 19:25:30.246411    2716 system_pods.go:89] "coredns-76f75df574-vqqtx" [5cce6545-fec3-4334-9041-de82b0e42801] Running
	I0415 19:25:30.246411    2716 system_pods.go:89] "etcd-multinode-841000" [ec0b243b-fd9f-4081-82dc-532086096935] Running
	I0415 19:25:30.246411    2716 system_pods.go:89] "kindnet-zrzd6" [53c9b26b-4969-46c3-ba6e-f831423010a8] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-apiserver-multinode-841000" [092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-controller-manager-multinode-841000" [8922765c-684e-491a-83a0-e06cec665bbd] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-proxy-7v79z" [0a08abf8-9fa3-4fab-86cc-1b709bc0d263] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "kube-scheduler-multinode-841000" [67374ab1-2ea0-4b43-82b8-1b666d274f2f] Running
	I0415 19:25:30.246529    2716 system_pods.go:89] "storage-provisioner" [d93f9b0a-834d-4028-ae0d-5e1287ef5b9e] Running
	I0415 19:25:30.246590    2716 system_pods.go:126] duration metric: took 218.5033ms to wait for k8s-apps to be running ...
	I0415 19:25:30.246648    2716 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 19:25:30.261020    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:25:30.290310    2716 system_svc.go:56] duration metric: took 43.72ms WaitForService to wait for kubelet
	I0415 19:25:30.290310    2716 kubeadm.go:576] duration metric: took 17.8553016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:25:30.290441    2716 node_conditions.go:102] verifying NodePressure condition ...
	I0415 19:25:30.413685    2716 request.go:629] Waited for 122.9101ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes
	I0415 19:25:30.413969    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes
	I0415 19:25:30.413969    2716 round_trippers.go:469] Request Headers:
	I0415 19:25:30.413969    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:25:30.413969    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:25:30.417798    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:25:30.417798    2716 round_trippers.go:577] Response Headers:
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:25:30.417798    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:25:30.417798    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:25:30 GMT
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Audit-Id: c19975c5-ae6b-43e5-9cdf-995055fceb8b
	I0415 19:25:30.417798    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:25:30.418457    2716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"467"},"items":[{"metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 5019 chars]
	I0415 19:25:30.418990    2716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 19:25:30.419129    2716 node_conditions.go:123] node cpu capacity is 2
	I0415 19:25:30.419197    2716 node_conditions.go:105] duration metric: took 128.7544ms to run NodePressure ...
	I0415 19:25:30.419197    2716 start.go:240] waiting for startup goroutines ...
	I0415 19:25:30.419197    2716 start.go:245] waiting for cluster config update ...
	I0415 19:25:30.419197    2716 start.go:254] writing updated cluster config ...
	I0415 19:25:30.423355    2716 out.go:177] 
	I0415 19:25:30.433747    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:25:30.433747    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:25:30.441803    2716 out.go:177] * Starting "multinode-841000-m02" worker node in "multinode-841000" cluster
	I0415 19:25:30.444513    2716 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:25:30.444513    2716 cache.go:56] Caching tarball of preloaded images
	I0415 19:25:30.444725    2716 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:25:30.444725    2716 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 19:25:30.444725    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:25:30.448717    2716 start.go:360] acquireMachinesLock for multinode-841000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 19:25:30.448717    2716 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-841000-m02"
	I0415 19:25:30.448717    2716 start.go:93] Provisioning new machine with config: &{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0415 19:25:30.448717    2716 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0415 19:25:30.451655    2716 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 19:25:30.452652    2716 start.go:159] libmachine.API.Create for "multinode-841000" (driver="hyperv")
	I0415 19:25:30.452652    2716 client.go:168] LocalClient.Create starting
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Decoding PEM data...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: Parsing certificate...
	I0415 19:25:30.452652    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0415 19:25:32.484504    2716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0415 19:25:32.485123    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:32.485123    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0415 19:25:34.315782    2716 main.go:141] libmachine: [stdout =====>] : False
	
	I0415 19:25:34.315848    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:34.315848    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:25:35.924286    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:25:35.924286    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:35.924649    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:25:39.877176    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:25:39.877451    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:39.880044    2716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 19:25:40.414681    2716 main.go:141] libmachine: Creating SSH key...
	I0415 19:25:40.681566    2716 main.go:141] libmachine: Creating VM...
	I0415 19:25:40.681566    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0415 19:25:43.813744    2716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0415 19:25:43.813744    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:43.814140    2716 main.go:141] libmachine: Using switch "Default Switch"
	I0415 19:25:43.814524    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0415 19:25:45.695906    2716 main.go:141] libmachine: [stdout =====>] : True
	
	I0415 19:25:45.696338    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:45.696338    2716 main.go:141] libmachine: Creating VHD
	I0415 19:25:45.696533    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0415 19:25:49.717524    2716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B52D9905-E0B9-4EC9-BCF9-7C8D0946F959
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0415 19:25:49.717524    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:49.718564    2716 main.go:141] libmachine: Writing magic tar header
	I0415 19:25:49.718601    2716 main.go:141] libmachine: Writing SSH key tar header
	I0415 19:25:49.728605    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0415 19:25:53.119108    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:25:53.119108    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:53.119630    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\disk.vhd' -SizeBytes 20000MB
	I0415 19:25:55.876757    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:25:55.876757    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:55.876757    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-841000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0415 19:25:59.821865    2716 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-841000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0415 19:25:59.822419    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:25:59.822485    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-841000-m02 -DynamicMemoryEnabled $false
	I0415 19:26:02.285447    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:02.285447    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:02.285447    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-841000-m02 -Count 2
	I0415 19:26:04.637133    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:04.637133    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:04.637133    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-841000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\boot2docker.iso'
	I0415 19:26:07.432080    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:07.432080    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:07.432080    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-841000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\disk.vhd'
	I0415 19:26:10.317032    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:10.318096    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:10.318096    2716 main.go:141] libmachine: Starting VM...
	I0415 19:26:10.318147    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-841000-m02
	I0415 19:26:13.637619    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:13.637619    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:13.637619    2716 main.go:141] libmachine: Waiting for host to start...
	I0415 19:26:13.637963    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:16.139550    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:16.140498    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:16.140498    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:18.868904    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:18.868904    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:19.884310    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:22.305674    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:22.305674    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:22.305832    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:25.067708    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:25.067708    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:26.075215    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:28.515976    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:28.515976    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:28.516755    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:31.291431    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:31.291431    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:32.306916    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:34.733876    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:34.733876    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:34.734808    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:37.458005    2716 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:26:37.458650    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:38.462177    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:40.885499    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:40.885499    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:40.885499    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:43.720556    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:26:43.720556    2716 main.go:141] libmachine: [stderr =====>] : 
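	(The wait loop above repeatedly queries the VM's state and the first IP address of its first network adapter, sleeping about a second between attempts, until Hyper-V reports an address. A minimal Python sketch of that polling pattern, with a stubbed query standing in for the real `(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]` PowerShell call, looks like:)

```python
import time

def poll_for_ip(query_ip, attempts=10, delay=0):
    """Poll until the query returns a non-empty IP, mirroring the
    'Waiting for host to start...' loop in the log above."""
    for _ in range(attempts):
        ip = query_ip()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("VM never reported an IP address")

# Stub: empty stdout for the first few attempts, then an address,
# just as in the log (the address here is the one the log reports).
responses = iter(["", "", "", "", "172.19.55.167"])
print(poll_for_ip(lambda: next(responses)))  # -> 172.19.55.167
```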
	I0415 19:26:43.720556    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:46.106444    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:46.107194    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:46.107194    2716 machine.go:94] provisionDockerMachine start ...
	I0415 19:26:46.107300    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:48.465768    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:48.465768    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:48.465768    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:51.247899    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:26:51.248957    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:51.255858    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:26:51.264764    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:26:51.264764    2716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:26:51.416489    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 19:26:51.416489    2716 buildroot.go:166] provisioning hostname "multinode-841000-m02"
	I0415 19:26:51.416489    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:53.746912    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:53.747745    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:53.747745    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:26:56.518120    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:26:56.518730    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:56.525102    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:26:56.525728    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:26:56.525728    2716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841000-m02 && echo "multinode-841000-m02" | sudo tee /etc/hostname
	I0415 19:26:56.692642    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841000-m02
	
	I0415 19:26:56.692766    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:26:58.998121    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:26:58.998121    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:26:58.998121    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:01.732214    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:01.732214    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:01.739502    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:01.740235    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:01.740235    2716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:27:01.897396    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 19:27:01.897462    2716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 19:27:01.897462    2716 buildroot.go:174] setting up certificates
	I0415 19:27:01.897462    2716 provision.go:84] configureAuth start
	I0415 19:27:01.897462    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:04.195532    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:04.195532    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:04.195532    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:06.956088    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:06.956088    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:06.957124    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:09.284697    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:09.284697    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:09.285465    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:12.052276    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:12.053064    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:12.053064    2716 provision.go:143] copyHostCerts
	I0415 19:27:12.053064    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 19:27:12.053064    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:27:12.053064    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 19:27:12.054089    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:27:12.054810    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 19:27:12.055514    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:27:12.055577    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 19:27:12.055577    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:27:12.056814    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 19:27:12.057065    2716 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:27:12.057265    2716 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 19:27:12.057440    2716 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 19:27:12.058060    2716 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-841000-m02 san=[127.0.0.1 172.19.55.167 localhost minikube multinode-841000-m02]
	I0415 19:27:12.345155    2716 provision.go:177] copyRemoteCerts
	I0415 19:27:12.358149    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:27:12.359154    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:14.692284    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:14.692284    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:14.693224    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:17.471628    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:17.471628    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:17.472723    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:27:17.585687    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2274964s)
	I0415 19:27:17.585687    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 19:27:17.586690    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 19:27:17.637828    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 19:27:17.638819    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 19:27:17.688236    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 19:27:17.688236    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0415 19:27:17.736238    2716 provision.go:87] duration metric: took 15.8386475s to configureAuth
	I0415 19:27:17.736312    2716 buildroot.go:189] setting minikube options for container-runtime
	I0415 19:27:17.736889    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:27:17.736998    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:20.064330    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:20.064991    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:20.065106    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:22.840388    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:22.840388    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:22.847718    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:22.848418    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:22.848418    2716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:27:22.997580    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 19:27:22.997580    2716 buildroot.go:70] root file system type: tmpfs
	I0415 19:27:22.997859    2716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:27:22.998019    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:25.303520    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:25.303520    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:25.303520    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:28.140886    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:28.140886    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:28.147106    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:28.147868    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:28.147868    2716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.62.237"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:27:28.330556    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.62.237
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
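The drop-in comments above explain why the first, empty `ExecStart=` is needed: for non-oneshot services, systemd accumulates `ExecStart=` lines, and an empty assignment resets the accumulated list. A small model of that last-writer-wins behavior (an illustrative sketch, not minikube or systemd code):

```python
def effective_execstart(unit_lines):
    """Model systemd's ExecStart= accumulation for non-oneshot services:
    an empty assignment clears previously accumulated commands, and a
    valid unit must end up with exactly one command."""
    cmds = []
    for line in unit_lines:
        line = line.strip()
        if not line.startswith("ExecStart="):
            continue
        value = line[len("ExecStart="):].strip()
        if value == "":
            cmds = []  # empty ExecStart= resets the list, as in the drop-in
        else:
            cmds.append(value)
    if len(cmds) != 1:
        # systemd's refusal reads: "Service has more than one ExecStart=
        # setting, which is only allowed for Type=oneshot services."
        raise ValueError("unit must end up with exactly one ExecStart= command")
    return cmds[0]

unit = [
    "ExecStart=/usr/bin/dockerd-from-base-unit",   # hypothetical inherited command
    "ExecStart=",                                  # reset, as in the unit above
    "ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock",
]
print(effective_execstart(unit))
```

Without the reset line, the model (like systemd) would see two commands and refuse the unit.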
	
	I0415 19:27:28.330556    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:30.713135    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:30.713135    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:30.713507    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:33.501213    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:33.501213    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:33.508843    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:33.509550    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:33.509550    2716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:27:35.731283    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 19:27:35.731283    2716 machine.go:97] duration metric: took 49.6236878s to provisionDockerMachine
	I0415 19:27:35.731721    2716 client.go:171] duration metric: took 2m5.278048s to LocalClient.Create
	I0415 19:27:35.731721    2716 start.go:167] duration metric: took 2m5.278048s to libmachine.API.Create "multinode-841000"
	I0415 19:27:35.731721    2716 start.go:293] postStartSetup for "multinode-841000-m02" (driver="hyperv")
	I0415 19:27:35.731721    2716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:27:35.746795    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:27:35.746795    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:38.048338    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:38.048448    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:38.048631    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:40.817973    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:40.818110    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:40.818110    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:27:40.928323    2716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1814858s)
	I0415 19:27:40.944198    2716 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 19:27:40.950768    2716 command_runner.go:130] > NAME=Buildroot
	I0415 19:27:40.950768    2716 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0415 19:27:40.950768    2716 command_runner.go:130] > ID=buildroot
	I0415 19:27:40.950768    2716 command_runner.go:130] > VERSION_ID=2023.02.9
	I0415 19:27:40.950768    2716 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0415 19:27:40.950880    2716 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 19:27:40.950959    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 19:27:40.951396    2716 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 19:27:40.952411    2716 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 19:27:40.952411    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /etc/ssl/certs/112722.pem
	I0415 19:27:40.966384    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 19:27:40.986069    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 19:27:41.037354    2716 start.go:296] duration metric: took 5.3055899s for postStartSetup
	I0415 19:27:41.040184    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:43.390814    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:43.390814    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:43.391031    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:46.169335    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:46.169335    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:46.169785    2716 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:27:46.172760    2716 start.go:128] duration metric: took 2m15.7229367s to createHost
	I0415 19:27:46.172913    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:48.496523    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:48.496523    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:48.496789    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:51.266508    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:51.266508    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:51.276633    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:51.277277    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:51.277277    2716 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 19:27:51.418809    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713209271.420392987
	
	I0415 19:27:51.418961    2716 fix.go:216] guest clock: 1713209271.420392987
	I0415 19:27:51.418961    2716 fix.go:229] Guest: 2024-04-15 19:27:51.420392987 +0000 UTC Remote: 2024-04-15 19:27:46.1728414 +0000 UTC m=+366.295365001 (delta=5.247551587s)
	I0415 19:27:51.419072    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:53.750298    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:53.750851    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:53.750851    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:27:56.547503    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:27:56.547503    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:56.554566    2716 main.go:141] libmachine: Using SSH client type: native
	I0415 19:27:56.554566    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.167 22 <nil> <nil>}
	I0415 19:27:56.555506    2716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713209271
	I0415 19:27:56.714182    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 19:27:51 UTC 2024
	
	I0415 19:27:56.714182    2716 fix.go:236] clock set: Mon Apr 15 19:27:51 UTC 2024
	 (err=<nil>)
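The clock-sync step above computes the guest/host delta and then sets the guest clock to a whole-second epoch. A quick check of the arithmetic from the logged values (illustrative only, not minikube code):

```python
# Fractional seconds within 19:27:xx UTC, taken from the fix.go lines above.
guest_seconds = 51.420392987   # guest clock reading
remote_seconds = 46.1728414    # remote (host-side) reading

delta = guest_seconds - remote_seconds
print(f"delta={delta:.9f}s")   # the log reports delta=5.247551587s

# minikube then truncates the guest epoch to whole seconds for `date -s`:
guest_epoch = 1713209271.420392987
print(int(guest_epoch))        # matches `sudo date -s @1713209271` above
```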
	I0415 19:27:56.714182    2716 start.go:83] releasing machines lock for "multinode-841000-m02", held for 2m26.264273s
	I0415 19:27:56.715141    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:27:59.007019    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:27:59.008018    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:27:59.008111    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:01.759720    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:28:01.759720    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:01.763252    2716 out.go:177] * Found network options:
	I0415 19:28:01.766577    2716 out.go:177]   - NO_PROXY=172.19.62.237
	W0415 19:28:01.771275    2716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 19:28:01.774032    2716 out.go:177]   - NO_PROXY=172.19.62.237
	W0415 19:28:01.775746    2716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 19:28:01.776486    2716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 19:28:01.779879    2716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 19:28:01.779879    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:28:01.793243    2716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 19:28:01.793243    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:28:04.167298    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:04.167298    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:04.167393    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:04.167549    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:04.167619    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:04.167619    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:06.989044    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:28:06.989044    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:06.989044    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:28:07.022259    2716 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:28:07.022780    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:07.022780    2716 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:28:07.155974    2716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0415 19:28:07.155974    2716 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.376051s)
	I0415 19:28:07.155974    2716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0415 19:28:07.155974    2716 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3626872s)
	W0415 19:28:07.155974    2716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 19:28:07.170020    2716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 19:28:07.201143    2716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0415 19:28:07.201427    2716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 19:28:07.201427    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:28:07.201703    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:28:07.241005    2716 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0415 19:28:07.255355    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 19:28:07.291743    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 19:28:07.311572    2716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 19:28:07.326255    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 19:28:07.358979    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:28:07.394832    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 19:28:07.433543    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:28:07.469002    2716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 19:28:07.504081    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 19:28:07.540876    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 19:28:07.577024    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
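Each of the sed commands above is an in-place regex rewrite of `/etc/containerd/config.toml`. The `SystemdCgroup` toggle, for instance, corresponds roughly to this (an illustrative sketch of the regex, not minikube code):

```python
import re

config = '    SystemdCgroup = true\n'

# Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
# The captured leading spaces (\1) preserve the TOML indentation.
updated = re.sub(r'^( *)SystemdCgroup = .*$', r'\1SystemdCgroup = false',
                 config, flags=re.MULTILINE)
print(updated)
```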
	I0415 19:28:07.614539    2716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 19:28:07.636285    2716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0415 19:28:07.650120    2716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 19:28:07.685591    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:07.911661    2716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 19:28:07.946495    2716 start.go:494] detecting cgroup driver to use...
	I0415 19:28:07.961870    2716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 19:28:07.990111    2716 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0415 19:28:07.990170    2716 command_runner.go:130] > [Unit]
	I0415 19:28:07.990170    2716 command_runner.go:130] > Description=Docker Application Container Engine
	I0415 19:28:07.990170    2716 command_runner.go:130] > Documentation=https://docs.docker.com
	I0415 19:28:07.990170    2716 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0415 19:28:07.990170    2716 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0415 19:28:07.990170    2716 command_runner.go:130] > StartLimitBurst=3
	I0415 19:28:07.990170    2716 command_runner.go:130] > StartLimitIntervalSec=60
	I0415 19:28:07.990170    2716 command_runner.go:130] > [Service]
	I0415 19:28:07.990170    2716 command_runner.go:130] > Type=notify
	I0415 19:28:07.990170    2716 command_runner.go:130] > Restart=on-failure
	I0415 19:28:07.990170    2716 command_runner.go:130] > Environment=NO_PROXY=172.19.62.237
	I0415 19:28:07.990170    2716 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0415 19:28:07.990170    2716 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0415 19:28:07.990170    2716 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0415 19:28:07.990170    2716 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0415 19:28:07.990170    2716 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0415 19:28:07.990170    2716 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0415 19:28:07.990170    2716 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0415 19:28:07.990170    2716 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0415 19:28:07.990170    2716 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0415 19:28:07.990170    2716 command_runner.go:130] > ExecStart=
	I0415 19:28:07.990170    2716 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0415 19:28:07.990170    2716 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0415 19:28:07.990170    2716 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0415 19:28:07.990170    2716 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0415 19:28:07.990170    2716 command_runner.go:130] > LimitNOFILE=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > LimitNPROC=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > LimitCORE=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0415 19:28:07.990170    2716 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0415 19:28:07.990170    2716 command_runner.go:130] > TasksMax=infinity
	I0415 19:28:07.990170    2716 command_runner.go:130] > TimeoutStartSec=0
	I0415 19:28:07.990170    2716 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0415 19:28:07.990170    2716 command_runner.go:130] > Delegate=yes
	I0415 19:28:07.990170    2716 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0415 19:28:07.990766    2716 command_runner.go:130] > KillMode=process
	I0415 19:28:07.990766    2716 command_runner.go:130] > [Install]
	I0415 19:28:07.990766    2716 command_runner.go:130] > WantedBy=multi-user.target
	I0415 19:28:08.005229    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:28:08.043923    2716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 19:28:08.098547    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 19:28:08.141362    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:28:08.183554    2716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 19:28:08.256797    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:28:08.285417    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:28:08.323205    2716 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0415 19:28:08.337558    2716 ssh_runner.go:195] Run: which cri-dockerd
	I0415 19:28:08.344700    2716 command_runner.go:130] > /usr/bin/cri-dockerd
	I0415 19:28:08.359602    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 19:28:08.379434    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 19:28:08.432111    2716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 19:28:08.657315    2716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 19:28:08.866222    2716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 19:28:08.866222    2716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 19:28:08.917477    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:09.144520    2716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:28:11.709306    2716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5647653s)
	I0415 19:28:11.723184    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 19:28:11.761181    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:28:11.802747    2716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 19:28:12.016577    2716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 19:28:12.230646    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:12.451428    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 19:28:12.498470    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:28:12.539510    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:12.767354    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 19:28:12.899469    2716 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 19:28:12.915466    2716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 19:28:12.926277    2716 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0415 19:28:12.926277    2716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0415 19:28:12.926277    2716 command_runner.go:130] > Device: 0,22	Inode: 871         Links: 1
	I0415 19:28:12.926277    2716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0415 19:28:12.926277    2716 command_runner.go:130] > Access: 2024-04-15 19:28:12.801135602 +0000
	I0415 19:28:12.926277    2716 command_runner.go:130] > Modify: 2024-04-15 19:28:12.801135602 +0000
	I0415 19:28:12.926277    2716 command_runner.go:130] > Change: 2024-04-15 19:28:12.804135626 +0000
	I0415 19:28:12.926277    2716 command_runner.go:130] >  Birth: -
	I0415 19:28:12.926277    2716 start.go:562] Will wait 60s for crictl version
	I0415 19:28:12.941258    2716 ssh_runner.go:195] Run: which crictl
	I0415 19:28:12.948276    2716 command_runner.go:130] > /usr/bin/crictl
	I0415 19:28:12.965585    2716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 19:28:13.025774    2716 command_runner.go:130] > Version:  0.1.0
	I0415 19:28:13.025867    2716 command_runner.go:130] > RuntimeName:  docker
	I0415 19:28:13.025867    2716 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0415 19:28:13.025867    2716 command_runner.go:130] > RuntimeApiVersion:  v1
	I0415 19:28:13.025999    2716 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0415 19:28:13.037127    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:28:13.077162    2716 command_runner.go:130] > 26.0.0
	I0415 19:28:13.087163    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:28:13.119653    2716 command_runner.go:130] > 26.0.0
	I0415 19:28:13.126089    2716 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 19:28:13.130037    2716 out.go:177]   - env NO_PROXY=172.19.62.237
	I0415 19:28:13.132042    2716 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 19:28:13.136031    2716 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 19:28:13.139073    2716 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 19:28:13.139073    2716 ip.go:210] interface addr: 172.19.48.1/20
	I0415 19:28:13.154034    2716 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 19:28:13.161944    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
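The bash one-liner above updates `/etc/hosts` by filtering out any existing `host.minikube.internal` line (`grep -v`), appending the fresh mapping, and copying the temp file back over the original. A rough Python equivalent of that filter-and-append (illustrative, not minikube code):

```python
def update_hosts(hosts_text, ip, name="host.minikube.internal"):
    """Drop any line ending in TAB+name, then append `ip<TAB>name`,
    mirroring the grep -v / echo / cp pipeline in the log above."""
    kept = [l for l in hosts_text.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n172.19.48.1\thost.minikube.internal\n"
print(update_hosts(before, "172.19.48.1"))
```

Because the old entry is removed first, re-running the step is idempotent: the file always ends up with exactly one `host.minikube.internal` line.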
	I0415 19:28:13.185520    2716 mustload.go:65] Loading cluster: multinode-841000
	I0415 19:28:13.186047    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:28:13.186241    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:28:15.494144    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:15.494144    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:15.494144    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:28:15.494936    2716 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000 for IP: 172.19.55.167
	I0415 19:28:15.494936    2716 certs.go:194] generating shared ca certs ...
	I0415 19:28:15.494936    2716 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:28:15.495704    2716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 19:28:15.495704    2716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 19:28:15.496331    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 19:28:15.496374    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 19:28:15.496374    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 19:28:15.496899    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 19:28:15.497123    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 19:28:15.497834    2716 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 19:28:15.497834    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 19:28:15.497834    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 19:28:15.498355    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 19:28:15.498560    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 19:28:15.499080    2716 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 19:28:15.499173    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem -> /usr/share/ca-certificates/11272.pem
	I0415 19:28:15.499173    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.499173    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:15.499901    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 19:28:15.552725    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 19:28:15.606153    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 19:28:15.659105    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 19:28:15.714653    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 19:28:15.764226    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 19:28:15.816405    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 19:28:15.882737    2716 ssh_runner.go:195] Run: openssl version
	I0415 19:28:15.892015    2716 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0415 19:28:15.906922    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 19:28:15.947524    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.955287    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.955287    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.972221    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 19:28:15.981810    2716 command_runner.go:130] > 3ec20f2e
	I0415 19:28:15.997140    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 19:28:16.033108    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 19:28:16.069127    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.078479    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.079106    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.094998    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:28:16.106524    2716 command_runner.go:130] > b5213941
	I0415 19:28:16.120645    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 19:28:16.156773    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 19:28:16.195649    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.204033    2716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.204159    2716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.218358    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 19:28:16.227375    2716 command_runner.go:130] > 51391683
	I0415 19:28:16.245562    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
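The loop above repeats, once per CA file, the hash-and-symlink convention OpenSSL uses to locate trust anchors: `openssl x509 -hash -noout` prints the 8-hex-digit subject-name hash, and a `<hash>.0` symlink under `/etc/ssl/certs` makes the certificate discoverable by that hash. A minimal offline sketch of the same dance — the scratch directory and throwaway self-signed cert are illustrative assumptions, not paths from this run:

```shell
# Subject-hash symlink convention, done in a scratch dir instead of
# /etc/ssl/certs. Requires the openssl CLI.
DIR=$(mktemp -d)
# Throwaway self-signed cert standing in for minikubeCA.pem etc.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$DIR/demo.key" -out "$DIR/demo.pem" -days 1 2>/dev/null
# Same command the log runs per cert: prints e.g. "3ec20f2e".
HASH=$(openssl x509 -hash -noout -in "$DIR/demo.pem")
# Same "test -L || ln -fs" idea as the log's /etc/ssl/certs/<hash>.0 link.
ln -fs "$DIR/demo.pem" "$DIR/$HASH.0"
```

Libraries that walk `/etc/ssl/certs` by hash (OpenSSL's `-CApath` lookup) resolve the `<hash>.0` name to the real certificate via this symlink.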
	I0415 19:28:16.284626    2716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 19:28:16.291591    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:28:16.292009    2716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 19:28:16.292580    2716 kubeadm.go:928] updating node {m02 172.19.55.167 8443 v1.29.3 docker false true} ...
	I0415 19:28:16.292694    2716 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-841000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.55.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 19:28:16.305779    2716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 19:28:16.325845    2716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	I0415 19:28:16.326145    2716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0415 19:28:16.338518    2716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0415 19:28:16.360611    2716 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0415 19:28:16.360611    2716 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0415 19:28:16.360611    2716 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0415 19:28:16.360611    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 19:28:16.360611    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 19:28:16.378597    2716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 19:28:16.379612    2716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 19:28:16.379612    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:28:16.385615    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 19:28:16.386906    2716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 19:28:16.387193    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0415 19:28:16.388167    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 19:28:16.388462    2716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 19:28:16.389410    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0415 19:28:16.448092    2716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 19:28:16.462829    2716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 19:28:16.580765    2716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 19:28:16.588812    2716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 19:28:16.589035    2716 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
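The three transfers above were preceded by a `checksum=file:<url>.sha256` decision (binary.go:76): rather than trusting a cached copy, minikube validates each downloaded binary against the SHA-256 digest published alongside it. The verification step can be sketched offline against a scratch file (fetching the real ~110 MB kubelet is out of scope; file name and contents here are made up):

```shell
# Digest check in the style of checksum=file:<url>.sha256, offline.
f=$(mktemp)
printf 'fake-kubelet-binary' > "$f"
# Stand-in for the published <binary>.sha256 file fetched from dl.k8s.io.
sha256sum "$f" | awk '{print $1}' > "$f.sha256"
# The check itself: published digest must equal the local file's digest.
[ "$(cat "$f.sha256")" = "$(sha256sum "$f" | awk '{print $1}')" ] && echo verified
```

A mismatch would make the test fail (non-zero exit), which is exactly when a download should be retried instead of installed.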
	I0415 19:28:17.818727    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0415 19:28:17.839758    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0415 19:28:17.876852    2716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 19:28:17.928267    2716 ssh_runner.go:195] Run: grep 172.19.62.237	control-plane.minikube.internal$ /etc/hosts
	I0415 19:28:17.935629    2716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.62.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
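The `/etc/hosts` rewrite just above is an idempotent pattern: strip any stale `control-plane.minikube.internal` line, append the current control-plane IP, and copy the result back via a temp file so the hosts file is never seen half-written. Sketched here against a scratch copy (file contents are illustrative, not from this run):

```shell
# Idempotent hosts-file update, run against a scratch copy instead of the
# real /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.0.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"
IP=172.19.62.237
# Drop any stale entry for the name, then append the current mapping —
# the same "{ grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts" shape.
{ grep -v 'control-plane.minikube.internal' "$HOSTS"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$IP"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```

Re-running the block leaves exactly one `control-plane.minikube.internal` entry, which is why minikube can apply it unconditionally on every start.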
	I0415 19:28:17.984995    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:18.210647    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:28:18.245059    2716 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:28:18.245372    2716 start.go:316] joinCluster: &{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:28:18.245960    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0415 19:28:18.246134    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:28:20.605265    2716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:28:20.605265    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:20.606327    2716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:28:23.415032    2716 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:28:23.415032    2716 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:28:23.415826    2716 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:28:23.636097    2716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 84gkie.kgltgtunor74f8b0 --discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 
	I0415 19:28:23.636097    2716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3900936s)
	I0415 19:28:23.636097    2716 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0415 19:28:23.636097    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 84gkie.kgltgtunor74f8b0 --discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-841000-m02"
	I0415 19:28:23.886868    2716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 19:28:25.749478    2716 command_runner.go:130] > [preflight] Running pre-flight checks
	I0415 19:28:25.750148    2716 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0415 19:28:25.750148    2716 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0415 19:28:25.750250    2716 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0415 19:28:25.750319    2716 command_runner.go:130] > This node has joined the cluster:
	I0415 19:28:25.750319    2716 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0415 19:28:25.750319    2716 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0415 19:28:25.750319    2716 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0415 19:28:25.750380    2716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 84gkie.kgltgtunor74f8b0 --discovery-token-ca-cert-hash sha256:64433043496e1f3289bbcb278fdd01bdcea4f93c3a13feee16fa4e3db23f59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-841000-m02": (2.1142655s)
	I0415 19:28:25.750455    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0415 19:28:26.017314    2716 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0415 19:28:26.263462    2716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-841000-m02 minikube.k8s.io/updated_at=2024_04_15T19_28_26_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c minikube.k8s.io/name=multinode-841000 minikube.k8s.io/primary=false
	I0415 19:28:26.401336    2716 command_runner.go:130] > node/multinode-841000-m02 labeled
	I0415 19:28:26.401476    2716 start.go:318] duration metric: took 8.1560374s to joinCluster
	I0415 19:28:26.401476    2716 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0415 19:28:26.404478    2716 out.go:177] * Verifying Kubernetes components...
	I0415 19:28:26.402031    2716 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:28:26.422932    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:28:26.672115    2716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:28:26.700599    2716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:28:26.701122    2716 kapi.go:59] client config for multinode-841000: &rest.Config{Host:"https://172.19.62.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-841000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 19:28:26.702127    2716 node_ready.go:35] waiting up to 6m0s for node "multinode-841000-m02" to be "Ready" ...
	I0415 19:28:26.702127    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:26.702127    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:26.702127    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:26.702127    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:26.716133    2716 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0415 19:28:26.716254    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Audit-Id: de199570-7367-4ac9-9137-154f849d564e
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:26.716254    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:26.716254    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Content-Length: 3927
	I0415 19:28:26.716254    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:26 GMT
	I0415 19:28:26.716254    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"635","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2903 chars]
	I0415 19:28:27.210993    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:27.211084    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:27.211084    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:27.211084    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:27.214446    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:27.214446    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:27.214446    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:27 GMT
	I0415 19:28:27.215078    2716 round_trippers.go:580]     Audit-Id: e1ced4c7-3bfd-4e2a-b6d3-9cba34ebc436
	I0415 19:28:27.215078    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:27.215078    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:27.215078    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:27.215078    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:27.215137    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:27.215188    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:27.710401    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:27.710605    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:27.710605    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:27.710670    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:27.716863    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:28:27.716863    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:27.716943    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:27.716943    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:27 GMT
	I0415 19:28:27.716943    2716 round_trippers.go:580]     Audit-Id: c3f0e237-b9e1-4e1b-a66f-0c8075c37bab
	I0415 19:28:27.717154    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:28.208081    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:28.208159    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:28.208159    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:28.208159    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:28.215731    2716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 19:28:28.215843    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:28.215843    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:28.215902    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:28.215902    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:28.215902    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:28.215953    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:28 GMT
	I0415 19:28:28.215953    2716 round_trippers.go:580]     Audit-Id: 2a6ebd9c-e7ac-4996-99b2-d60d337f9561
	I0415 19:28:28.215953    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:28.216164    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:28.709525    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:28.709525    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:28.709525    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:28.709525    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:28.713187    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:28.713187    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:28.713187    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:28.713187    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:28 GMT
	I0415 19:28:28.713896    2716 round_trippers.go:580]     Audit-Id: 70ed5e05-c2de-459f-8b27-d22241dcdbcd
	I0415 19:28:28.713896    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:28.713896    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:28.713896    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:28.713896    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:28.714088    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:28.714204    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:29.209783    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:29.209783    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:29.209783    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:29.209783    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:29.214392    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:29.214392    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:29.214392    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:29 GMT
	I0415 19:28:29.214392    2716 round_trippers.go:580]     Audit-Id: 03d636ca-0936-469f-8f91-3f96b54df795
	I0415 19:28:29.214567    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:29.214567    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:29.214567    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:29.214567    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:29.214611    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:29.214669    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:29.708063    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:29.708119    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:29.708119    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:29.708119    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:29.711712    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:29.711712    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:29.712325    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:29.712325    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:29.712325    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:29.712325    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:29.712406    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:29.712447    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:29 GMT
	I0415 19:28:29.712447    2716 round_trippers.go:580]     Audit-Id: 1cf3d0e8-46a4-412c-b09d-f8a86f5f0afa
	I0415 19:28:29.712666    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:30.217193    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:30.217193    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:30.217193    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:30.217193    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:30.220927    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:30.220927    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:30.220927    2716 round_trippers.go:580]     Audit-Id: 8fcc0251-e4d1-4444-90aa-c9d488dfc088
	I0415 19:28:30.220927    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:30.220927    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:30.221806    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:30.221806    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:30.221806    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:30.221852    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:30 GMT
	I0415 19:28:30.221869    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:30.702637    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:30.702905    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:30.702905    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:30.702905    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:30.708237    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:30.709254    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:30.709286    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:30 GMT
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Audit-Id: b77d69c5-3750-4309-b763-3af292fe3c18
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:30.709286    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:30.709286    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:30.709402    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:31.216890    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:31.216970    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:31.216970    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:31.216970    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:31.221303    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:31.221374    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:31.221483    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:31.221571    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:31 GMT
	I0415 19:28:31.221853    2716 round_trippers.go:580]     Audit-Id: bf7a4364-6929-4a37-97cd-a9c3cb5b34a4
	I0415 19:28:31.221853    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:31.221853    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:31.221853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:31.221853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:31.221853    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:31.222536    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:31.705205    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:31.705205    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:31.705205    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:31.705205    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:31.709296    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:31.709296    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Audit-Id: 76d2a746-caa4-4395-b964-078f42cf77d7
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:31.709689    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:31.709689    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:31.709689    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:31 GMT
	I0415 19:28:31.709809    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:32.212183    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:32.212183    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:32.212183    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:32.212183    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:32.221079    2716 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 19:28:32.221079    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:32 GMT
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Audit-Id: c0e4cd6f-4bab-4238-9cfa-49e193d7b46a
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:32.221079    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:32.221079    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:32.221079    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:32.221079    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:32.702680    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:32.702720    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:32.702790    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:32.702790    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:32.707667    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:32.707744    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:32.707744    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:32.707744    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:32 GMT
	I0415 19:28:32.707744    2716 round_trippers.go:580]     Audit-Id: 2edac6f5-1847-42ad-81a4-cc4502513e72
	I0415 19:28:32.708032    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:33.206652    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:33.206652    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:33.206760    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:33.206760    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:33.211025    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:33.211302    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:33 GMT
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Audit-Id: 8f547bd6-4f43-4170-beac-1c6a6ecf3a5f
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:33.211302    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:33.211302    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:33.211426    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:33.211426    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:33.211615    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:33.710416    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:33.710416    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:33.710666    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:33.710666    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:33.714046    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:33.714789    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:33 GMT
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Audit-Id: 3f835fce-512f-4cf4-bce6-4518ac5e9ccc
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:33.714789    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:33.714789    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:33.714789    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:33.714905    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:33.715363    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:34.217497    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:34.217497    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:34.217497    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:34.217497    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:34.221974    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:34.221974    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:34.221974    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:34.221974    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:34.221974    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:34.221974    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:34.222995    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:34 GMT
	I0415 19:28:34.222995    2716 round_trippers.go:580]     Audit-Id: 0e8d9fad-d333-49e5-978d-1085326d5235
	I0415 19:28:34.222995    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:34.223187    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:34.705852    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:34.705852    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:34.705852    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:34.705852    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:34.710471    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:34.710471    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:34 GMT
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Audit-Id: e118ae73-1611-45e2-a266-fc0b966092ec
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:34.710471    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:34.710471    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:34.710471    2716 round_trippers.go:580]     Content-Length: 4036
	I0415 19:28:34.710471    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"638","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3012 chars]
	I0415 19:28:35.211738    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:35.211840    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:35.211840    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:35.211840    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:35.499070    2716 round_trippers.go:574] Response Status: 200 OK in 287 milliseconds
	I0415 19:28:35.499070    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:35.499561    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:35.499561    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:35 GMT
	I0415 19:28:35.499561    2716 round_trippers.go:580]     Audit-Id: 0c882350-f476-4567-86a6-8f3fd8ed0867
	I0415 19:28:35.499799    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:35.711186    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:35.711186    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:35.711186    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:35.711186    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:35.847232    2716 round_trippers.go:574] Response Status: 200 OK in 136 milliseconds
	I0415 19:28:35.847922    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:35.847922    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:35.847922    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:35 GMT
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Audit-Id: e1c35e4c-ff93-4e97-b3c5-0ddbb2d3fe90
	I0415 19:28:35.847922    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:35.847922    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:35.848678    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:36.213903    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:36.213903    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:36.213903    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:36.213903    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:36.219073    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:36.219073    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:36.219073    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:36 GMT
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Audit-Id: 05a725cb-4b91-4098-9013-a7838dbbbd38
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:36.219073    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:36.219073    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:36.219666    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:36.702945    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:36.702945    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:36.702945    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:36.702945    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:36.706983    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:36.707736    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:36.707736    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:36.707736    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:36 GMT
	I0415 19:28:36.707736    2716 round_trippers.go:580]     Audit-Id: d745f174-5628-48e3-9bfb-7361dcddc7a3
	I0415 19:28:36.707736    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:37.211118    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:37.211118    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:37.211118    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:37.211118    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:37.215882    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:37.215882    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:37 GMT
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Audit-Id: 2b28b06b-0370-4a3d-a20b-818fdac09947
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:37.215882    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:37.215882    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:37.215882    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:37.215882    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:37.716959    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:37.716959    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:37.716959    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:37.716959    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:37.720550    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:37.721411    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:37.721411    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:37.721411    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:37.721503    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:37 GMT
	I0415 19:28:37.721503    2716 round_trippers.go:580]     Audit-Id: dc1ddb2c-0dd0-4b29-a8da-cace0712f9dd
	I0415 19:28:37.721503    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:37.721545    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:37.721797    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:38.206715    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:38.206900    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:38.206900    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:38.206900    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:38.214576    2716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 19:28:38.215467    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:38.215467    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:38 GMT
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Audit-Id: 6eac6008-9656-4627-8cf1-fa0c7ec88672
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:38.215467    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:38.215467    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:38.215467    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:38.216340    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:38.709462    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:38.709462    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:38.709462    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:38.709462    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:38.713979    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:38.713979    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Audit-Id: 31e6718e-584e-4674-ba73-77084e9af962
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:38.713979    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:38.713979    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:38.713979    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:38 GMT
	I0415 19:28:38.714513    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:39.218580    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:39.218643    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:39.218713    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:39.218713    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:39.223558    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:39.223558    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:39.223558    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:39 GMT
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Audit-Id: 623e8c9c-09d6-4f44-8aae-dfdba2378099
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:39.223558    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:39.223558    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:39.223558    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:39.707387    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:39.707652    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:39.707652    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:39.707652    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:39.711434    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:39.711434    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:39.711434    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:39 GMT
	I0415 19:28:39.711434    2716 round_trippers.go:580]     Audit-Id: 4b06cd31-8b77-4eef-bc8f-b5f729b6e1d5
	I0415 19:28:39.712377    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:39.712377    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:39.712377    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:39.712429    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:39.712606    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:40.216889    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:40.216889    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:40.216889    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:40.216889    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:40.227554    2716 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0415 19:28:40.227978    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:40.227978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:40.227978    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:40 GMT
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Audit-Id: 956aa654-e4d7-4677-a08e-5f32468c768c
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:40.227978    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:40.228499    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:40.228499    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:40.708640    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:40.708939    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:40.708939    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:40.708939    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:40.714534    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:40.714534    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:40.714534    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:40.714534    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:40 GMT
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Audit-Id: 8a2087c9-c25d-49d9-8c14-a0450309cb48
	I0415 19:28:40.714534    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:40.719950    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:41.209398    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:41.209398    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:41.209398    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:41.209398    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:41.213014    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:41.213458    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Audit-Id: 08346594-9041-4260-adbb-6946a834593a
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:41.213458    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:41.213458    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:41.213458    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:41 GMT
	I0415 19:28:41.213458    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:41.711705    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:41.711791    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:41.711791    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:41.711791    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:41.715266    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:41.715266    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:41.716039    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:41.716039    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:41 GMT
	I0415 19:28:41.716039    2716 round_trippers.go:580]     Audit-Id: 8c06bb01-93fa-4ea7-a1c1-1ee4439b257d
	I0415 19:28:41.716324    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:42.216722    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:42.216722    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:42.216722    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:42.216722    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:42.233579    2716 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0415 19:28:42.233579    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:42.233579    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:42.233579    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:42 GMT
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Audit-Id: d553af8d-dd6d-42aa-b933-28768c26a6af
	I0415 19:28:42.233579    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:42.233579    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:42.233579    2716 node_ready.go:53] node "multinode-841000-m02" has status "Ready":"False"
	I0415 19:28:42.717567    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:42.717567    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:42.717673    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:42.717673    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:42.721012    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:42.721012    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:42.721012    2716 round_trippers.go:580]     Audit-Id: 2b84ad32-3f2f-440d-9bf7-d49c4428fbcc
	I0415 19:28:42.721012    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:42.721012    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:42.721012    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:42.721012    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:42.721780    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:42 GMT
	I0415 19:28:42.722179    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"649","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3404 chars]
	I0415 19:28:43.218108    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:43.218192    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.218192    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.218192    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.222562    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:43.222562    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Audit-Id: 209d17f8-9c3b-4339-aa4d-4f96a6324ed8
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.222562    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.222562    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.222562    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.223721    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"666","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3270 chars]
	I0415 19:28:43.224242    2716 node_ready.go:49] node "multinode-841000-m02" has status "Ready":"True"
	I0415 19:28:43.224318    2716 node_ready.go:38] duration metric: took 16.5220572s for node "multinode-841000-m02" to be "Ready" ...
	I0415 19:28:43.224378    2716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:28:43.224438    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods
	I0415 19:28:43.224438    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.224438    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.224526    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.229646    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:43.229646    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.230651    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.230685    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Audit-Id: 632df9a5-6871-45d7-ba11-3b8ee28cdfec
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.230685    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.232496    2716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"669"},"items":[{"metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70426 chars]
	I0415 19:28:43.236752    2716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.236752    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-vqqtx
	I0415 19:28:43.236752    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.236752    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.236752    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.240684    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.240684    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.241149    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.241149    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Audit-Id: 14f4fb54-2353-430a-8f2a-38d2a580896b
	I0415 19:28:43.241149    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.241403    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-vqqtx","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"5cce6545-fec3-4334-9041-de82b0e42801","resourceVersion":"460","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"83780525-0642-4265-aa15-7ef8ee4dcb17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83780525-0642-4265-aa15-7ef8ee4dcb17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0415 19:28:43.241526    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.241526    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.241526    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.241526    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.247063    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:43.247063    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Audit-Id: e1a1b9b6-fd63-442d-aed4-2a8a1b6bcb9d
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.247125    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.247125    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.247125    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.247474    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.248193    2716 pod_ready.go:92] pod "coredns-76f75df574-vqqtx" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.248193    2716 pod_ready.go:81] duration metric: took 11.4415ms for pod "coredns-76f75df574-vqqtx" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.248193    2716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.248390    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-841000
	I0415 19:28:43.248466    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.248466    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.248466    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.265925    2716 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0415 19:28:43.265925    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.265925    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.265925    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.266877    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.266877    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.266877    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.266877    2716 round_trippers.go:580]     Audit-Id: 7bab3292-3d0b-421f-926b-de45869519d3
	I0415 19:28:43.267035    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-841000","namespace":"kube-system","uid":"ec0b243b-fd9f-4081-82dc-532086096935","resourceVersion":"420","creationTimestamp":"2024-04-15T19:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.62.237:2379","kubernetes.io/config.hash":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.mirror":"e14f778ba3e14a3effd052cdd14002ca","kubernetes.io/config.seen":"2024-04-15T19:24:49.499002669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0415 19:28:43.267475    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.267581    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.267581    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.267581    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.271338    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.271338    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.271338    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Audit-Id: dfedf64e-cfee-4548-9692-b7a564c28054
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.271338    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.271338    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.271830    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.271830    2716 pod_ready.go:92] pod "etcd-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.271830    2716 pod_ready.go:81] duration metric: took 23.6365ms for pod "etcd-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.271830    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.272389    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-841000
	I0415 19:28:43.272389    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.272559    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.272559    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.275770    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.275932    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Audit-Id: 461212d4-49b0-41c7-aab6-486c3fe219dd
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.275932    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.276007    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.276007    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.276066    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-841000","namespace":"kube-system","uid":"092f3aee-b99d-4e46-b42d-ae1b3e2f6c8b","resourceVersion":"419","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.62.237:8443","kubernetes.io/config.hash":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.mirror":"c06ba545f7155478447169e98c788e3f","kubernetes.io/config.seen":"2024-04-15T19:24:59.013465769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0415 19:28:43.276959    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.276959    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.276959    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.276959    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.280853    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.280853    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Audit-Id: ec708104-8fd3-41a8-95eb-a6c66790b9c8
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.280853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.280853    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.280853    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.280853    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.281857    2716 pod_ready.go:92] pod "kube-apiserver-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.281857    2716 pod_ready.go:81] duration metric: took 10.0268ms for pod "kube-apiserver-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.281857    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.281857    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-841000
	I0415 19:28:43.281857    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.281857    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.281857    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.285141    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.285141    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.285141    2716 round_trippers.go:580]     Audit-Id: 5d51c5b8-8724-499e-b1f2-f63ccbe19b15
	I0415 19:28:43.285141    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.285141    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.285141    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.286119    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.286119    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.286119    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-841000","namespace":"kube-system","uid":"8922765c-684e-491a-83a0-e06cec665bbd","resourceVersion":"417","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.mirror":"9d43b7787e40d9d062807a067e1e26cc","kubernetes.io/config.seen":"2024-04-15T19:24:59.013467070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0415 19:28:43.286837    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.286837    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.286837    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.286837    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.289407    2716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 19:28:43.289407    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.289407    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.289407    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.289407    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.289407    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.290446    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.290446    2716 round_trippers.go:580]     Audit-Id: 2c1b48e9-a61b-4f99-a456-7e5f3b9f5c34
	I0415 19:28:43.290584    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.290869    2716 pod_ready.go:92] pod "kube-controller-manager-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.290989    2716 pod_ready.go:81] duration metric: took 9.1322ms for pod "kube-controller-manager-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.290989    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.420912    2716 request.go:629] Waited for 129.4443ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7v79z
	I0415 19:28:43.421046    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7v79z
	I0415 19:28:43.421046    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.421046    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.421046    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.425434    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:43.425434    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Audit-Id: cfb9c58b-b8f5-4d5f-809e-cf190b11fef0
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.425434    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.425434    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.425434    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.426273    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7v79z","generateName":"kube-proxy-","namespace":"kube-system","uid":"0a08abf8-9fa3-4fab-86cc-1b709bc0d263","resourceVersion":"414","creationTimestamp":"2024-04-15T19:25:12Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c07d15d0-ec90-403c-8aa0-1c81c17e9eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07d15d0-ec90-403c-8aa0-1c81c17e9eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0415 19:28:43.625724    2716 request.go:629] Waited for 198.2211ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.625805    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:43.625805    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.625892    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.625892    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.629742    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:43.629742    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Audit-Id: ed530d90-14b1-49e2-9b3a-3486a68617cf
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.630767    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.630767    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.630767    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.631016    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:43.631471    2716 pod_ready.go:92] pod "kube-proxy-7v79z" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:43.631581    2716 pod_ready.go:81] duration metric: took 340.4714ms for pod "kube-proxy-7v79z" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.631581    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbmcg" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:43.830671    2716 request.go:629] Waited for 198.7625ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbmcg
	I0415 19:28:43.830927    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbmcg
	I0415 19:28:43.830968    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:43.830968    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:43.830968    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:43.835626    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:43.835626    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:43.835626    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:43.835626    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:43.835626    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:43.835626    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:43.835996    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:43 GMT
	I0415 19:28:43.835996    2716 round_trippers.go:580]     Audit-Id: 4dbeb3e6-25e6-4b1d-b7ab-1030b696086d
	I0415 19:28:43.836105    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mbmcg","generateName":"kube-proxy-","namespace":"kube-system","uid":"893d185a-0a7b-4fbf-b2d9-824070c9ddd8","resourceVersion":"654","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c07d15d0-ec90-403c-8aa0-1c81c17e9eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07d15d0-ec90-403c-8aa0-1c81c17e9eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0415 19:28:44.020840    2716 request.go:629] Waited for 184.0027ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:44.021037    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000-m02
	I0415 19:28:44.021118    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.021160    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.021181    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.025831    2716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 19:28:44.025831    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.025831    2716 round_trippers.go:580]     Audit-Id: 36bdbede-c929-4d81-b85f-afc4195d0e85
	I0415 19:28:44.026083    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.026083    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.026083    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.026083    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.026083    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.026449    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000-m02","uid":"32909ea8-d59a-41b7-ab53-08e0f27ed3b9","resourceVersion":"666","creationTimestamp":"2024-04-15T19:28:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_15T19_28_26_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:28:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3270 chars]
	I0415 19:28:44.026937    2716 pod_ready.go:92] pod "kube-proxy-mbmcg" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:44.027018    2716 pod_ready.go:81] duration metric: took 395.4337ms for pod "kube-proxy-mbmcg" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:44.027018    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:44.224930    2716 request.go:629] Waited for 197.8194ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:28:44.225386    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-841000
	I0415 19:28:44.225437    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.225437    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.225437    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.229597    2716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 19:28:44.229597    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.229597    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.229597    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Audit-Id: fbe26691-9718-44fc-9f96-1c2c3f5dca72
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.229597    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.230507    2716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-841000","namespace":"kube-system","uid":"67374ab1-2ea0-4b43-82b8-1b666d274f2f","resourceVersion":"418","creationTimestamp":"2024-04-15T19:24:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.mirror":"4a04a4641e7860cc5b6e00042829e3c0","kubernetes.io/config.seen":"2024-04-15T19:24:59.013468170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T19:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0415 19:28:44.428468    2716 request.go:629] Waited for 197.7739ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:44.428599    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes/multinode-841000
	I0415 19:28:44.428599    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.428599    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.428657    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.434813    2716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 19:28:44.434813    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Audit-Id: ec1e40c8-cdf3-48d7-be78-0e49766d2cd7
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.434813    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.434813    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.434813    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.434813    2716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-15T19:24:54Z","fieldsType":"Fields [truncated 4966 chars]
	I0415 19:28:44.435497    2716 pod_ready.go:92] pod "kube-scheduler-multinode-841000" in "kube-system" namespace has status "Ready":"True"
	I0415 19:28:44.435497    2716 pod_ready.go:81] duration metric: took 408.4753ms for pod "kube-scheduler-multinode-841000" in "kube-system" namespace to be "Ready" ...
	I0415 19:28:44.435497    2716 pod_ready.go:38] duration metric: took 1.211109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:28:44.435497    2716 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 19:28:44.450350    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:28:44.476836    2716 system_svc.go:56] duration metric: took 41.3388ms WaitForService to wait for kubelet
	I0415 19:28:44.476836    2716 kubeadm.go:576] duration metric: took 18.0752139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:28:44.477785    2716 node_conditions.go:102] verifying NodePressure condition ...
	I0415 19:28:44.632396    2716 request.go:629] Waited for 154.2957ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.62.237:8443/api/v1/nodes
	I0415 19:28:44.632483    2716 round_trippers.go:463] GET https://172.19.62.237:8443/api/v1/nodes
	I0415 19:28:44.632483    2716 round_trippers.go:469] Request Headers:
	I0415 19:28:44.632483    2716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 19:28:44.632483    2716 round_trippers.go:473]     Accept: application/json, */*
	I0415 19:28:44.638407    2716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 19:28:44.638407    2716 round_trippers.go:577] Response Headers:
	I0415 19:28:44.638555    2716 round_trippers.go:580]     Audit-Id: dd4cc3b7-0be6-4412-8032-f98466538598
	I0415 19:28:44.638578    2716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 19:28:44.638603    2716 round_trippers.go:580]     Content-Type: application/json
	I0415 19:28:44.638603    2716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 844bc784-31b2-4292-a3ac-de1a3b62e3b3
	I0415 19:28:44.638603    2716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e1bdb937-33a9-493d-9057-d7f77b910de9
	I0415 19:28:44.638648    2716 round_trippers.go:580]     Date: Mon, 15 Apr 2024 19:28:44 GMT
	I0415 19:28:44.638697    2716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"670"},"items":[{"metadata":{"name":"multinode-841000","uid":"3738f089-af56-4d3c-9376-425ecbcb02ba","resourceVersion":"465","creationTimestamp":"2024-04-15T19:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-841000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"multinode-841000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T19_24_59_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 9281 chars]
	I0415 19:28:44.639417    2716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 19:28:44.639417    2716 node_conditions.go:123] node cpu capacity is 2
	I0415 19:28:44.639417    2716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 19:28:44.639417    2716 node_conditions.go:123] node cpu capacity is 2
	I0415 19:28:44.639417    2716 node_conditions.go:105] duration metric: took 161.6306ms to run NodePressure ...
	I0415 19:28:44.639417    2716 start.go:240] waiting for startup goroutines ...
	I0415 19:28:44.639979    2716 start.go:254] writing updated cluster config ...
	I0415 19:28:44.655358    2716 ssh_runner.go:195] Run: rm -f paused
	I0415 19:28:44.820600    2716 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 19:28:44.830125    2716 out.go:177] * Done! kubectl is now configured to use "multinode-841000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843113167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843201869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843222470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.843432375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845331519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845486623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845504223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:25:27 multinode-841000 dockerd[1334]: time="2024-04-15T19:25:27.845878532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.971951741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.972243544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.972268944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:11 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:11.973312853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:12 multinode-841000 cri-dockerd[1235]: time="2024-04-15T19:29:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3830cdbfba8a40c644fcba4f515494e825b7b2f795c752165479000bcabc8533/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 15 19:29:13 multinode-841000 cri-dockerd[1235]: time="2024-04-15T19:29:13Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538138188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538328490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538353390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:29:13 multinode-841000 dockerd[1334]: time="2024-04-15T19:29:13.538496891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 19:30:04 multinode-841000 dockerd[1328]: 2024/04/15 19:30:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:30:04 multinode-841000 dockerd[1328]: 2024/04/15 19:30:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:30:04 multinode-841000 dockerd[1328]: 2024/04/15 19:30:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:30:04 multinode-841000 dockerd[1328]: 2024/04/15 19:30:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:30:04 multinode-841000 dockerd[1328]: 2024/04/15 19:30:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:30:04 multinode-841000 dockerd[1328]: 2024/04/15 19:30:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:30:04 multinode-841000 dockerd[1328]: 2024/04/15 19:30:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	89943bb7b3d8d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   3830cdbfba8a4       busybox-7fdf7869d9-gkn8h
	023c483d6cc6b       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   8e500689099df       coredns-76f75df574-vqqtx
	13b8950243469       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   eaba3da43a795       storage-provisioner
	6ed282cec4581       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago      Running             kindnet-cni               0                   0eee7b8b55814       kindnet-zrzd6
	cc8a027d4211d       a1d263b5dc5b0                                                                                         22 minutes ago      Running             kube-proxy                0                   433adb937eeae       kube-proxy-7v79z
	8d334a05315f6       8c390d98f50c0                                                                                         22 minutes ago      Running             kube-scheduler            0                   a4fe4cd1aa4c5       kube-scheduler-multinode-841000
	af7b5d2bf03e6       6052a25da3f97                                                                                         22 minutes ago      Running             kube-controller-manager   0                   58667570745a9       kube-controller-manager-multinode-841000
	6867880d79723       39f995c9f1996                                                                                         22 minutes ago      Running             kube-apiserver            0                   b367a28f9f2e7       kube-apiserver-multinode-841000
	230daf2c59cd5       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   ff71106bb6df0       etcd-multinode-841000
	
	
	==> coredns [023c483d6cc6] <==
	[INFO] 10.244.0.3:49964 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000208702s
	[INFO] 10.244.1.2:60232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000278402s
	[INFO] 10.244.1.2:51509 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000287102s
	[INFO] 10.244.1.2:52348 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141101s
	[INFO] 10.244.1.2:34223 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185102s
	[INFO] 10.244.1.2:50171 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000172102s
	[INFO] 10.244.1.2:47185 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249302s
	[INFO] 10.244.1.2:44434 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000222602s
	[INFO] 10.244.1.2:32889 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157502s
	[INFO] 10.244.0.3:39242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220802s
	[INFO] 10.244.0.3:56718 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000238002s
	[INFO] 10.244.0.3:33231 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000967s
	[INFO] 10.244.0.3:52683 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060101s
	[INFO] 10.244.1.2:59976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134601s
	[INFO] 10.244.1.2:45235 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000302602s
	[INFO] 10.244.1.2:35171 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000260702s
	[INFO] 10.244.1.2:48805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000762s
	[INFO] 10.244.0.3:60616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246402s
	[INFO] 10.244.0.3:36380 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114301s
	[INFO] 10.244.0.3:47182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090801s
	[INFO] 10.244.0.3:55760 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000092601s
	[INFO] 10.244.1.2:35347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146601s
	[INFO] 10.244.1.2:56464 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288102s
	[INFO] 10.244.1.2:54660 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000709s
	[INFO] 10.244.1.2:43202 - 5 "PTR IN 1.48.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000058601s
	
	
	==> describe nodes <==
	Name:               multinode-841000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=multinode-841000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T19_24_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 19:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 19:47:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 19:44:51 +0000   Mon, 15 Apr 2024 19:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 19:44:51 +0000   Mon, 15 Apr 2024 19:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 19:44:51 +0000   Mon, 15 Apr 2024 19:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 19:44:51 +0000   Mon, 15 Apr 2024 19:25:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.62.237
	  Hostname:    multinode-841000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7c14b12674c41e0878785eed7d197fc
	  System UUID:                4a57c417-cda2-a24a-90d7-fc6ccd0391d4
	  Boot ID:                    0f92915c-52b2-4e4c-acc7-87e8e0ff34dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gkn8h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-76f75df574-vqqtx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-multinode-841000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-zrzd6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-multinode-841000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-multinode-841000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-7v79z                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-multinode-841000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-841000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-841000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-841000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-841000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-841000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-841000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node multinode-841000 event: Registered Node multinode-841000 in Controller
	  Normal  NodeReady                21m                kubelet          Node multinode-841000 status is now: NodeReady
	
	
	Name:               multinode-841000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=multinode-841000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T19_28_26_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 19:28:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 19:47:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 19:44:45 +0000   Mon, 15 Apr 2024 19:28:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 19:44:45 +0000   Mon, 15 Apr 2024 19:28:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 19:44:45 +0000   Mon, 15 Apr 2024 19:28:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 19:44:45 +0000   Mon, 15 Apr 2024 19:28:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.55.167
	  Hostname:    multinode-841000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 371c8e1d12f1450088f192415d94b9af
	  System UUID:                740c74a4-1425-a745-bde4-543f010981ea
	  Boot ID:                    263a0e94-df3a-46b2-99db-47f12924e038
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-hfpk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-2cgqg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-mbmcg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x2 over 18m)  kubelet          Node multinode-841000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x2 over 18m)  kubelet          Node multinode-841000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x2 over 18m)  kubelet          Node multinode-841000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node multinode-841000-m02 event: Registered Node multinode-841000-m02 in Controller
	  Normal  NodeReady                18m                kubelet          Node multinode-841000-m02 status is now: NodeReady
	
	
	Name:               multinode-841000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=multinode-841000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T19_33_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 19:33:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 19:41:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 15 Apr 2024 19:38:59 +0000   Mon, 15 Apr 2024 19:42:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 15 Apr 2024 19:38:59 +0000   Mon, 15 Apr 2024 19:42:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 15 Apr 2024 19:38:59 +0000   Mon, 15 Apr 2024 19:42:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 15 Apr 2024 19:38:59 +0000   Mon, 15 Apr 2024 19:42:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.19.60.4
	  Hostname:    multinode-841000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1a223131ec4472394470ece0889dfac
	  System UUID:                d6fd5253-fa26-0749-89be-d3712131a459
	  Boot ID:                    5445a99d-c0b7-4c30-bfca-c0778f3a4c12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8mwsh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-9rtqj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node multinode-841000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node multinode-841000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node multinode-841000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node multinode-841000-m03 event: Registered Node multinode-841000-m03 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-841000-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m8s               node-controller  Node multinode-841000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr15 19:23] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.201130] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Apr15 19:24] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.134096] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.670575] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.220137] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.266682] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.959731] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.248545] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.216396] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.321934] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.112117] kauditd_printk_skb: 183 callbacks suppressed
	[ +11.861909] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.125408] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.369882] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +6.857961] systemd-fstab-generator[1719]: Ignoring "noauto" option for root device
	[  +0.117218] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.878328] systemd-fstab-generator[2130]: Ignoring "noauto" option for root device
	[  +0.155563] kauditd_printk_skb: 62 callbacks suppressed
	[Apr15 19:25] systemd-fstab-generator[2318]: Ignoring "noauto" option for root device
	[  +0.165691] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.054875] kauditd_printk_skb: 51 callbacks suppressed
	[Apr15 19:29] kauditd_printk_skb: 14 callbacks suppressed
	[Apr15 19:36] hrtimer: interrupt took 1466214 ns
	
	
	==> etcd [230daf2c59cd] <==
	{"level":"warn","ts":"2024-04-15T19:33:27.85594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.474019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T19:33:27.855962Z","caller":"traceutil/trace.go:171","msg":"trace[931671598] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:994; }","duration":"249.519219ms","start":"2024-04-15T19:33:27.606436Z","end":"2024-04-15T19:33:27.855955Z","steps":["trace[931671598] 'agreement among raft nodes before linearized reading'  (duration: 249.486419ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T19:33:28.20399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.100844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841000-m03\" ","response":"range_response_count:1 size:2981"}
	{"level":"info","ts":"2024-04-15T19:33:28.20408Z","caller":"traceutil/trace.go:171","msg":"trace[650792872] range","detail":"{range_begin:/registry/minions/multinode-841000-m03; range_end:; response_count:1; response_revision:994; }","duration":"241.225345ms","start":"2024-04-15T19:33:27.96284Z","end":"2024-04-15T19:33:28.204066Z","steps":["trace[650792872] 'range keys from in-memory index tree'  (duration: 241.012344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T19:33:34.195651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.862597ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11172461882149941317 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-841000-m03\" mod_revision:994 > success:<request_put:<key:\"/registry/minions/multinode-841000-m03\" value_size:3092 >> failure:<request_range:<key:\"/registry/minions/multinode-841000-m03\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-15T19:33:34.195725Z","caller":"traceutil/trace.go:171","msg":"trace[1639307124] linearizableReadLoop","detail":"{readStateIndex:1126; appliedIndex:1125; }","duration":"238.317922ms","start":"2024-04-15T19:33:33.957392Z","end":"2024-04-15T19:33:34.19571Z","steps":["trace[1639307124] 'read index received'  (duration: 33.3µs)","trace[1639307124] 'applied index is now lower than readState.Index'  (duration: 238.283522ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T19:33:34.195806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.427423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841000-m03\" ","response":"range_response_count:1 size:3153"}
	{"level":"info","ts":"2024-04-15T19:33:34.195827Z","caller":"traceutil/trace.go:171","msg":"trace[886704193] range","detail":"{range_begin:/registry/minions/multinode-841000-m03; range_end:; response_count:1; response_revision:1003; }","duration":"238.473123ms","start":"2024-04-15T19:33:33.957347Z","end":"2024-04-15T19:33:34.19582Z","steps":["trace[886704193] 'agreement among raft nodes before linearized reading'  (duration: 238.398922ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T19:33:34.196044Z","caller":"traceutil/trace.go:171","msg":"trace[65242955] transaction","detail":"{read_only:false; response_revision:1003; number_of_response:1; }","duration":"522.238849ms","start":"2024-04-15T19:33:33.673789Z","end":"2024-04-15T19:33:34.196028Z","steps":["trace[65242955] 'process raft request'  (duration: 128.889847ms)","trace[65242955] 'compare'  (duration: 392.465294ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T19:33:34.196111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T19:33:33.673768Z","time spent":"522.30525ms","remote":"127.0.0.1:58364","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3138,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-841000-m03\" mod_revision:994 > success:<request_put:<key:\"/registry/minions/multinode-841000-m03\" value_size:3092 >> failure:<request_range:<key:\"/registry/minions/multinode-841000-m03\" > >"}
	{"level":"info","ts":"2024-04-15T19:34:52.135889Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":767}
	{"level":"info","ts":"2024-04-15T19:34:52.175768Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":767,"took":"38.601847ms","hash":4032446207,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2625536,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-15T19:34:52.17591Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4032446207,"revision":767,"compact-revision":-1}
	{"level":"info","ts":"2024-04-15T19:39:52.167404Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1099}
	{"level":"info","ts":"2024-04-15T19:39:52.18476Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1099,"took":"16.575952ms","hash":2188793870,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1871872,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2024-04-15T19:39:52.184967Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2188793870,"revision":1099,"compact-revision":767}
	{"level":"info","ts":"2024-04-15T19:41:36.410164Z","caller":"traceutil/trace.go:171","msg":"trace[433175605] transaction","detail":"{read_only:false; response_revision:1501; number_of_response:1; }","duration":"134.69754ms","start":"2024-04-15T19:41:36.275444Z","end":"2024-04-15T19:41:36.410142Z","steps":["trace[433175605] 'process raft request'  (duration: 134.505838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T19:41:37.556123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.467931ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11172461882149943471 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.19.62.237\" mod_revision:1493 > success:<request_put:<key:\"/registry/masterleases/172.19.62.237\" value_size:66 lease:1949089845295167661 >> failure:<request_range:<key:\"/registry/masterleases/172.19.62.237\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-15T19:41:37.556236Z","caller":"traceutil/trace.go:171","msg":"trace[1629106161] transaction","detail":"{read_only:false; response_revision:1502; number_of_response:1; }","duration":"391.0241ms","start":"2024-04-15T19:41:37.165195Z","end":"2024-04-15T19:41:37.556219Z","steps":["trace[1629106161] 'process raft request'  (duration: 235.524268ms)","trace[1629106161] 'compare'  (duration: 155.242929ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T19:41:37.55703Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T19:41:37.165177Z","time spent":"391.543304ms","remote":"127.0.0.1:58218","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.19.62.237\" mod_revision:1493 > success:<request_put:<key:\"/registry/masterleases/172.19.62.237\" value_size:66 lease:1949089845295167661 >> failure:<request_range:<key:\"/registry/masterleases/172.19.62.237\" > >"}
	{"level":"warn","ts":"2024-04-15T19:41:37.830986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.382837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T19:41:37.831913Z","caller":"traceutil/trace.go:171","msg":"trace[846544224] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1502; }","duration":"135.318145ms","start":"2024-04-15T19:41:37.696575Z","end":"2024-04-15T19:41:37.831893Z","steps":["trace[846544224] 'count revisions from in-memory index tree'  (duration: 134.272636ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T19:44:52.185992Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1399}
	{"level":"info","ts":"2024-04-15T19:44:52.195074Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1399,"took":"8.64518ms","hash":3468664424,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1753088,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-15T19:44:52.195296Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3468664424,"revision":1399,"compact-revision":1099}
	
	
	==> kernel <==
	 19:47:16 up 24 min,  0 users,  load average: 0.19, 0.41, 0.35
	Linux multinode-841000 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6ed282cec458] <==
	I0415 19:46:33.582218       1 main.go:250] Node multinode-841000-m03 has CIDR [10.244.2.0/24] 
	I0415 19:46:43.589563       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:46:43.589774       1 main.go:227] handling current node
	I0415 19:46:43.589791       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:46:43.589801       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:46:43.590043       1 main.go:223] Handling node with IPs: map[172.19.60.4:{}]
	I0415 19:46:43.590080       1 main.go:250] Node multinode-841000-m03 has CIDR [10.244.2.0/24] 
	I0415 19:46:53.599439       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:46:53.600562       1 main.go:227] handling current node
	I0415 19:46:53.600900       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:46:53.601015       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:46:53.601550       1 main.go:223] Handling node with IPs: map[172.19.60.4:{}]
	I0415 19:46:53.602013       1 main.go:250] Node multinode-841000-m03 has CIDR [10.244.2.0/24] 
	I0415 19:47:03.617902       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:47:03.617996       1 main.go:227] handling current node
	I0415 19:47:03.618012       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:47:03.618073       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:47:03.618324       1 main.go:223] Handling node with IPs: map[172.19.60.4:{}]
	I0415 19:47:03.618554       1 main.go:250] Node multinode-841000-m03 has CIDR [10.244.2.0/24] 
	I0415 19:47:13.631155       1 main.go:223] Handling node with IPs: map[172.19.62.237:{}]
	I0415 19:47:13.631262       1 main.go:227] handling current node
	I0415 19:47:13.631277       1 main.go:223] Handling node with IPs: map[172.19.55.167:{}]
	I0415 19:47:13.631286       1 main.go:250] Node multinode-841000-m02 has CIDR [10.244.1.0/24] 
	I0415 19:47:13.631569       1 main.go:223] Handling node with IPs: map[172.19.60.4:{}]
	I0415 19:47:13.631640       1 main.go:250] Node multinode-841000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6867880d7972] <==
	I0415 19:24:56.790361       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 19:24:56.982171       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 19:24:56.995853       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.62.237]
	I0415 19:24:56.997482       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 19:24:57.010333       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 19:24:57.503685       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 19:24:58.933886       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 19:24:58.968275       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 19:24:58.996233       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 19:25:12.001163       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 19:25:12.223394       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0415 19:33:27.857737       1 trace.go:236] Trace[546576419]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:19057fca-2c03-4373-b532-72eb422a3124,client:172.19.62.237,api-group:,api-version:v1,name:multinode-841000-m03,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-841000-m03,user-agent:kube-controller-manager/v1.29.3 (linux/amd64) kubernetes/6813625/system:serviceaccount:kube-system:node-controller,verb:PATCH (15-Apr-2024 19:33:27.304) (total time: 552ms):
	Trace[546576419]: ["GuaranteedUpdate etcd3" audit-id:19057fca-2c03-4373-b532-72eb422a3124,key:/minions/multinode-841000-m03,type:*core.Node,resource:nodes 552ms (19:33:27.305)
	Trace[546576419]:  ---"Txn call completed" 548ms (19:33:27.857)]
	Trace[546576419]: ---"Object stored in database" 549ms (19:33:27.857)
	Trace[546576419]: [552.724215ms] [552.724215ms] END
	I0415 19:33:34.197346       1 trace.go:236] Trace[1310190236]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fb5ba7e4-b58f-4177-b822-47f636498b79,client:172.19.60.4,api-group:,api-version:v1,name:multinode-841000-m03,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-841000-m03/status,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:PATCH (15-Apr-2024 19:33:33.667) (total time: 529ms):
	Trace[1310190236]: ["GuaranteedUpdate etcd3" audit-id:fb5ba7e4-b58f-4177-b822-47f636498b79,key:/minions/multinode-841000-m03,type:*core.Node,resource:nodes 529ms (19:33:33.668)
	Trace[1310190236]:  ---"Txn call completed" 523ms (19:33:34.196)]
	Trace[1310190236]: ---"Object stored in database" 524ms (19:33:34.196)
	Trace[1310190236]: [529.214411ms] [529.214411ms] END
	I0415 19:41:37.557845       1 trace.go:236] Trace[1950955017]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.62.237,type:*v1.Endpoints,resource:apiServerIPInfo (15-Apr-2024 19:41:37.043) (total time: 514ms):
	Trace[1950955017]: ---"Transaction prepared" 118ms (19:41:37.164)
	Trace[1950955017]: ---"Txn call completed" 393ms (19:41:37.557)
	Trace[1950955017]: [514.155633ms] [514.155633ms] END
	
	
	==> kube-controller-manager [af7b5d2bf03e] <==
	I0415 19:28:27.100265       1 event.go:376] "Event occurred" object="multinode-841000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-841000-m02 event: Registered Node multinode-841000-m02 in Controller"
	I0415 19:28:42.950027       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-841000-m02"
	I0415 19:29:11.301858       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0415 19:29:11.329527       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-hfpk6"
	I0415 19:29:11.369131       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gkn8h"
	I0415 19:29:11.379268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="78.242453ms"
	I0415 19:29:11.422629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.798757ms"
	I0415 19:29:11.455694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.400371ms"
	I0415 19:29:11.456533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="672.706µs"
	I0415 19:29:14.226778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="19.663563ms"
	I0415 19:29:14.227467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="166.401µs"
	I0415 19:29:14.304800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.761072ms"
	I0415 19:29:14.306468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="150.201µs"
	I0415 19:33:23.448005       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-841000-m02"
	I0415 19:33:23.448243       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841000-m03\" does not exist"
	I0415 19:33:23.520908       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9rtqj"
	I0415 19:33:23.520973       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8mwsh"
	I0415 19:33:23.532851       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-841000-m03" podCIDRs=["10.244.2.0/24"]
	I0415 19:33:27.268755       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-841000-m03"
	I0415 19:33:27.268840       1 event.go:376] "Event occurred" object="multinode-841000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-841000-m03 event: Registered Node multinode-841000-m03 in Controller"
	I0415 19:33:46.633650       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-841000-m02"
	I0415 19:42:07.407059       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-841000-m02"
	I0415 19:42:07.407312       1 event.go:376] "Event occurred" object="multinode-841000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-841000-m03 status is now: NodeNotReady"
	I0415 19:42:07.427680       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-9rtqj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0415 19:42:07.449278       1 event.go:376] "Event occurred" object="kube-system/kindnet-8mwsh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [cc8a027d4211] <==
	I0415 19:25:14.944883       1 server_others.go:72] "Using iptables proxy"
	I0415 19:25:14.961420       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.62.237"]
	I0415 19:25:15.076544       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 19:25:15.076703       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 19:25:15.076723       1 server_others.go:168] "Using iptables Proxier"
	I0415 19:25:15.081239       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 19:25:15.082383       1 server.go:865] "Version info" version="v1.29.3"
	I0415 19:25:15.082420       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 19:25:15.083884       1 config.go:188] "Starting service config controller"
	I0415 19:25:15.083932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 19:25:15.084121       1 config.go:97] "Starting endpoint slice config controller"
	I0415 19:25:15.084201       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 19:25:15.087448       1 config.go:315] "Starting node config controller"
	I0415 19:25:15.087481       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 19:25:15.185348       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 19:25:15.185460       1 shared_informer.go:318] Caches are synced for service config
	I0415 19:25:15.188983       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8d334a05315f] <==
	W0415 19:24:55.501678       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 19:24:55.501880       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 19:24:55.675925       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 19:24:55.676265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 19:24:55.754252       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:55.754425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:55.847516       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:55.847572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:55.851092       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 19:24:55.851140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 19:24:55.861466       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:55.861820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:55.954178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 19:24:55.954371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 19:24:55.959844       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 19:24:55.960089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 19:24:56.041986       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 19:24:56.042536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 19:24:56.071137       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 19:24:56.071929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 19:24:56.110763       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 19:24:56.111230       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 19:24:56.172830       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 19:24:56.173223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0415 19:24:57.859636       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 19:42:59 multinode-841000 kubelet[2137]: E0415 19:42:59.183322    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:42:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:42:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:42:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:42:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:43:59 multinode-841000 kubelet[2137]: E0415 19:43:59.182456    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:43:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:43:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:43:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:43:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:44:59 multinode-841000 kubelet[2137]: E0415 19:44:59.182780    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:44:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:44:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:44:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:44:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:45:59 multinode-841000 kubelet[2137]: E0415 19:45:59.182123    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:45:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:45:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:45:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:45:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 19:46:59 multinode-841000 kubelet[2137]: E0415 19:46:59.182232    2137 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 19:46:59 multinode-841000 kubelet[2137]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 19:46:59 multinode-841000 kubelet[2137]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 19:46:59 multinode-841000 kubelet[2137]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 19:46:59 multinode-841000 kubelet[2137]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:47:07.148437    5624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-841000 -n multinode-841000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-841000 -n multinode-841000: (13.0585395s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-841000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (298.25s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (261.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-841000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-841000
E0415 19:48:16.833551   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-841000: (2m26.1598703s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-841000 --wait=true -v=8 --alsologtostderr
E0415 19:50:10.548561   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-841000 --wait=true -v=8 --alsologtostderr: exit status 1 (1m42.2414117s)

                                                
                                                
-- stdout --
	* [multinode-841000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-841000" primary control-plane node in "multinode-841000" cluster
	* Restarting existing hyperv VM for "multinode-841000" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:49:57.804083    6076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 19:49:57.889043    6076 out.go:291] Setting OutFile to fd 512 ...
	I0415 19:49:57.889691    6076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:49:57.889691    6076 out.go:304] Setting ErrFile to fd 828...
	I0415 19:49:57.889691    6076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:49:57.912414    6076 out.go:298] Setting JSON to false
	I0415 19:49:57.915536    6076 start.go:129] hostinfo: {"hostname":"minikube6","uptime":22324,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 19:49:57.915536    6076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 19:49:57.996580    6076 out.go:177] * [multinode-841000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 19:49:58.125895    6076 notify.go:220] Checking for updates...
	I0415 19:49:58.172497    6076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 19:49:58.343592    6076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 19:49:58.516681    6076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 19:49:58.709086    6076 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 19:49:58.874383    6076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 19:49:59.156325    6076 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:49:59.156541    6076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 19:50:05.001900    6076 out.go:177] * Using the hyperv driver based on existing profile
	I0415 19:50:05.102565    6076 start.go:297] selected driver: hyperv
	I0415 19:50:05.103075    6076 start.go:901] validating driver "hyperv" against &{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.52.34 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:50:05.103138    6076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 19:50:05.162825    6076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:50:05.163435    6076 cni.go:84] Creating CNI manager for ""
	I0415 19:50:05.163435    6076 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0415 19:50:05.163712    6076 start.go:340] cluster config:
	{Name:multinode-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.62.237 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.55.167 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.52.34 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:50:05.164310    6076 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 19:50:05.208737    6076 out.go:177] * Starting "multinode-841000" primary control-plane node in "multinode-841000" cluster
	I0415 19:50:05.214024    6076 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 19:50:05.214788    6076 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 19:50:05.214877    6076 cache.go:56] Caching tarball of preloaded images
	I0415 19:50:05.215343    6076 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:50:05.215494    6076 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 19:50:05.215721    6076 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:50:05.218507    6076 start.go:360] acquireMachinesLock for multinode-841000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 19:50:05.218507    6076 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-841000"
	I0415 19:50:05.219059    6076 start.go:96] Skipping create...Using existing machine configuration
	I0415 19:50:05.219163    6076 fix.go:54] fixHost starting: 
	I0415 19:50:05.220134    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:08.203642    6076 main.go:141] libmachine: [stdout =====>] : Off
	
	I0415 19:50:08.203642    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:08.203761    6076 fix.go:112] recreateIfNeeded on multinode-841000: state=Stopped err=<nil>
	W0415 19:50:08.203761    6076 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 19:50:08.207102    6076 out.go:177] * Restarting existing hyperv VM for "multinode-841000" ...
	I0415 19:50:08.210630    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-841000
	I0415 19:50:11.480907    6076 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:50:11.480907    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:11.480907    6076 main.go:141] libmachine: Waiting for host to start...
	I0415 19:50:11.481576    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:13.876632    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:13.876632    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:13.877626    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:16.542090    6076 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:50:16.542090    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:17.554211    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:19.918865    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:19.918865    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:19.919584    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:22.698246    6076 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:50:22.698246    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:23.707332    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:26.080074    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:26.080074    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:26.080517    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:28.771988    6076 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:50:28.771988    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:29.783014    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:32.119254    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:32.119505    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:32.119751    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:34.795461    6076 main.go:141] libmachine: [stdout =====>] : 
	I0415 19:50:34.795461    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:35.799900    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:38.158488    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:38.158488    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:38.158488    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:40.928112    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:50:40.928112    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:40.931256    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:43.218869    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:43.219854    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:43.219965    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:45.970361    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:50:45.970484    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:45.970484    6076 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-841000\config.json ...
	I0415 19:50:45.973766    6076 machine.go:94] provisionDockerMachine start ...
	I0415 19:50:45.973766    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:48.326636    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:48.327681    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:48.327681    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:51.033682    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:50:51.034246    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:51.040313    6076 main.go:141] libmachine: Using SSH client type: native
	I0415 19:50:51.041115    6076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.145 22 <nil> <nil>}
	I0415 19:50:51.041115    6076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:50:51.165493    6076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 19:50:51.165493    6076 buildroot.go:166] provisioning hostname "multinode-841000"
	I0415 19:50:51.165605    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:53.481804    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:53.482200    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:53.482200    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:50:56.249374    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:50:56.250190    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:56.257500    6076 main.go:141] libmachine: Using SSH client type: native
	I0415 19:50:56.258208    6076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.145 22 <nil> <nil>}
	I0415 19:50:56.258208    6076 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841000 && echo "multinode-841000" | sudo tee /etc/hostname
	I0415 19:50:56.413751    6076 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841000
	
	I0415 19:50:56.413884    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:50:58.716983    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:50:58.716983    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:50:58.717056    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:51:01.479563    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:51:01.479563    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:01.487681    6076 main.go:141] libmachine: Using SSH client type: native
	I0415 19:51:01.487681    6076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.145 22 <nil> <nil>}
	I0415 19:51:01.487681    6076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:51:01.627038    6076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 19:51:01.627038    6076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 19:51:01.627038    6076 buildroot.go:174] setting up certificates
	I0415 19:51:01.627038    6076 provision.go:84] configureAuth start
	I0415 19:51:01.627038    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:51:03.959100    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:51:03.959100    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:03.959622    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:51:06.674669    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:51:06.675770    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:06.675770    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:51:08.978319    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:51:08.978319    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:08.978648    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:51:11.759703    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:51:11.759703    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:11.759703    6076 provision.go:143] copyHostCerts
	I0415 19:51:11.760114    6076 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0415 19:51:11.760288    6076 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:51:11.760443    6076 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 19:51:11.760944    6076 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 19:51:11.762116    6076 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0415 19:51:11.762375    6076 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:51:11.762375    6076 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 19:51:11.762375    6076 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:51:11.763726    6076 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0415 19:51:11.763726    6076 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:51:11.763726    6076 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 19:51:11.764410    6076 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:51:11.765227    6076 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-841000 san=[127.0.0.1 172.19.62.145 localhost minikube multinode-841000]
	I0415 19:51:12.022525    6076 provision.go:177] copyRemoteCerts
	I0415 19:51:12.036551    6076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:51:12.036551    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:51:14.326083    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:51:14.326083    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:14.326190    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:51:16.985366    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:51:16.986022    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:16.986720    6076 sshutil.go:53] new ssh client: &{IP:172.19.62.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:51:17.094968    6076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0583767s)
	I0415 19:51:17.094968    6076 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 19:51:17.095615    6076 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 19:51:17.147548    6076 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 19:51:17.147548    6076 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0415 19:51:17.197996    6076 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 19:51:17.198558    6076 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 19:51:17.250527    6076 provision.go:87] duration metric: took 15.6233648s to configureAuth
	I0415 19:51:17.250527    6076 buildroot.go:189] setting minikube options for container-runtime
	I0415 19:51:17.251476    6076 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:51:17.251476    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:51:19.533912    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:51:19.533912    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:19.533912    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:51:22.278776    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:51:22.279372    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:22.285685    6076 main.go:141] libmachine: Using SSH client type: native
	I0415 19:51:22.286312    6076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.145 22 <nil> <nil>}
	I0415 19:51:22.286312    6076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:51:22.425846    6076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 19:51:22.425846    6076 buildroot.go:70] root file system type: tmpfs
	I0415 19:51:22.426127    6076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:51:22.426229    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:51:24.760034    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:51:24.760034    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:24.760989    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:51:27.525032    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:51:27.525032    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:27.532778    6076 main.go:141] libmachine: Using SSH client type: native
	I0415 19:51:27.533311    6076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.145 22 <nil> <nil>}
	I0415 19:51:27.533471    6076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:51:27.691722    6076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 19:51:27.692258    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:51:30.004645    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:51:30.005248    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:30.005248    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:51:32.763318    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.145
	
	I0415 19:51:32.763318    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:32.774097    6076 main.go:141] libmachine: Using SSH client type: native
	I0415 19:51:32.774639    6076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.62.145 22 <nil> <nil>}
	I0415 19:51:32.774713    6076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:51:35.395690    6076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 19:51:35.395690    6076 machine.go:97] duration metric: took 49.4215327s to provisionDockerMachine
	I0415 19:51:35.395690    6076 start.go:293] postStartSetup for "multinode-841000" (driver="hyperv")
	I0415 19:51:35.395690    6076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:51:35.410515    6076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:51:35.410515    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:51:37.720617    6076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:51:37.721596    6076 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:51:37.721596    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-841000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-841000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-841000: context deadline exceeded (557.9µs)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-841000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-841000	172.19.62.237
multinode-841000-m02	172.19.55.167
multinode-841000-m03	172.19.52.34

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-841000 -n multinode-841000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-841000 -n multinode-841000: exit status 6 (13.0334048s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0415 19:51:40.066678   10008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0415 19:51:52.886925   10008 status.go:417] kubeconfig endpoint: get endpoint: "multinode-841000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-841000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (261.73s)
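The provisioning step in the log above installs the regenerated `docker.service` with a diff-or-install one-liner (`sudo diff -u … || { sudo mv …; … systemctl restart docker; }`), so the daemon is only reloaded when the unit actually changed. A minimal, generic sketch of that idempotent-update idiom; the `/tmp/svc.*` paths and `old`/`new` contents are placeholders for illustration, not minikube's real files:

```shell
#!/bin/sh
set -eu

# Stand-in files: the currently installed unit and a freshly rendered candidate.
printf 'old\n' > /tmp/svc.current
printf 'new\n' > /tmp/svc.new

# Install the candidate only when it differs from what is already in place;
# diff exits non-zero on any difference, which triggers the install branch.
if ! diff -u /tmp/svc.current /tmp/svc.new >/dev/null 2>&1; then
  mv /tmp/svc.new /tmp/svc.current
  echo "updated"        # in minikube this is where daemon-reload/restart would run
else
  rm -f /tmp/svc.new
  echo "unchanged"
fi

cat /tmp/svc.current
```

In the failing run the diff step reported `can't stat '/lib/systemd/system/docker.service'`, i.e. the "no current file" case, which the `||` branch also covers by installing the new unit unconditionally.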

TestKubernetesUpgrade (1067.91s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-982500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-982500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (8m33.31789s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-982500
E0415 20:21:53.602640   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-982500: (36.9229096s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-982500 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-982500 status --format={{.Host}}: exit status 7 (2.6389837s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0415 20:22:21.200171    8832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-982500 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-982500 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (7m18.854359s)

-- stdout --
	* [kubernetes-upgrade-982500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-982500" primary control-plane node in "kubernetes-upgrade-982500" cluster
	* Restarting existing hyperv VM for "kubernetes-upgrade-982500" ...
	
	

-- /stdout --
** stderr ** 
	W0415 20:22:23.844705    7316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 20:22:23.928435    7316 out.go:291] Setting OutFile to fd 1980 ...
	I0415 20:22:23.930110    7316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 20:22:23.930110    7316 out.go:304] Setting ErrFile to fd 1240...
	I0415 20:22:23.930110    7316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 20:22:23.956017    7316 out.go:298] Setting JSON to false
	I0415 20:22:23.960975    7316 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24270,"bootTime":1713188273,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 20:22:23.960975    7316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 20:22:24.031838    7316 out.go:177] * [kubernetes-upgrade-982500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 20:22:24.073292    7316 notify.go:220] Checking for updates...
	I0415 20:22:24.163500    7316 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 20:22:24.276878    7316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 20:22:24.467111    7316 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 20:22:24.567048    7316 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 20:22:24.781068    7316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 20:22:24.844855    7316 config.go:182] Loaded profile config "kubernetes-upgrade-982500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 20:22:24.845801    7316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 20:22:30.710707    7316 out.go:177] * Using the hyperv driver based on existing profile
	I0415 20:22:30.713802    7316 start.go:297] selected driver: hyperv
	I0415 20:22:30.713802    7316 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-982500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-982500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.52.203 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 20:22:30.714807    7316 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 20:22:30.767761    7316 cni.go:84] Creating CNI manager for ""
	I0415 20:22:30.767839    7316 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 20:22:30.767965    7316 start.go:340] cluster config:
	{Name:kubernetes-upgrade-982500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-982500 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.52.203 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 20:22:30.768206    7316 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 20:22:30.771985    7316 out.go:177] * Starting "kubernetes-upgrade-982500" primary control-plane node in "kubernetes-upgrade-982500" cluster
	I0415 20:22:30.775456    7316 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 20:22:30.775456    7316 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 20:22:30.775456    7316 cache.go:56] Caching tarball of preloaded images
	I0415 20:22:30.775992    7316 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 20:22:30.776163    7316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 20:22:30.776163    7316 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-982500\config.json ...
	I0415 20:22:30.778858    7316 start.go:360] acquireMachinesLock for kubernetes-upgrade-982500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 20:26:37.592610    7316 start.go:364] duration metric: took 4m6.8116749s to acquireMachinesLock for "kubernetes-upgrade-982500"
	I0415 20:26:37.592903    7316 start.go:96] Skipping create...Using existing machine configuration
	I0415 20:26:37.592998    7316 fix.go:54] fixHost starting: 
	I0415 20:26:37.593736    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:26:39.935470    7316 main.go:141] libmachine: [stdout =====>] : Off
	
	I0415 20:26:39.935581    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:26:39.935581    7316 fix.go:112] recreateIfNeeded on kubernetes-upgrade-982500: state=Stopped err=<nil>
	W0415 20:26:39.935667    7316 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 20:26:39.947180    7316 out.go:177] * Restarting existing hyperv VM for "kubernetes-upgrade-982500" ...
	I0415 20:26:39.949996    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-982500
	I0415 20:26:43.510252    7316 main.go:141] libmachine: [stdout =====>] : 
	I0415 20:26:43.510252    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:26:43.510252    7316 main.go:141] libmachine: Waiting for host to start...
	I0415 20:26:43.510252    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:26:46.134946    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:26:46.135494    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:26:46.135618    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:26:48.984646    7316 main.go:141] libmachine: [stdout =====>] : 
	I0415 20:26:48.984788    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:26:49.999993    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:26:52.389098    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:26:52.389098    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:26:52.389425    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:26:55.142047    7316 main.go:141] libmachine: [stdout =====>] : 
	I0415 20:26:55.142289    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:26:56.152114    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:26:58.660110    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:26:58.660110    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:26:58.660110    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:01.447340    7316 main.go:141] libmachine: [stdout =====>] : 
	I0415 20:27:01.447754    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:02.458615    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:04.854175    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:04.854275    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:04.854408    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:07.647970    7316 main.go:141] libmachine: [stdout =====>] : 
	I0415 20:27:07.648753    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:08.660202    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:11.091774    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:11.092330    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:11.092479    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:13.915260    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:13.915260    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:13.918942    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:16.291173    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:16.291173    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:16.291173    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:19.091715    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:19.091715    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:19.091850    7316 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-982500\config.json ...
	I0415 20:27:19.094952    7316 machine.go:94] provisionDockerMachine start ...
	I0415 20:27:19.095199    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:21.680077    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:21.680433    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:21.680577    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:24.731282    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:24.731282    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:24.741286    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:27:24.742447    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:27:24.742447    7316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 20:27:24.883978    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 20:27:24.884319    7316 buildroot.go:166] provisioning hostname "kubernetes-upgrade-982500"
	I0415 20:27:24.884388    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:27.235258    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:27.235258    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:27.235258    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:30.106932    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:30.106932    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:30.116861    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:27:30.117774    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:27:30.117774    7316 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-982500 && echo "kubernetes-upgrade-982500" | sudo tee /etc/hostname
	I0415 20:27:30.283034    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-982500
	
	I0415 20:27:30.283034    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:32.679538    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:32.679538    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:32.679614    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:35.483627    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:35.483685    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:35.490427    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:27:35.491093    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:27:35.491128    7316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-982500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-982500/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-982500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 20:27:35.642766    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 20:27:35.642860    7316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 20:27:35.642860    7316 buildroot.go:174] setting up certificates
	I0415 20:27:35.642860    7316 provision.go:84] configureAuth start
	I0415 20:27:35.642860    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:38.127419    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:38.127419    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:38.127718    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:41.079282    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:41.080295    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:41.080295    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:43.459243    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:43.459243    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:43.459730    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:46.268389    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:46.268944    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:46.268944    7316 provision.go:143] copyHostCerts
	I0415 20:27:46.269192    7316 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 20:27:46.269192    7316 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 20:27:46.269865    7316 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 20:27:46.271171    7316 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 20:27:46.271171    7316 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 20:27:46.271171    7316 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 20:27:46.272591    7316 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 20:27:46.272761    7316 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 20:27:46.273143    7316 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 20:27:46.274169    7316 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-982500 san=[127.0.0.1 172.19.55.157 kubernetes-upgrade-982500 localhost minikube]
	I0415 20:27:46.423972    7316 provision.go:177] copyRemoteCerts
	I0415 20:27:46.436967    7316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 20:27:46.436967    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:48.802598    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:48.803467    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:48.803528    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:51.637266    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:51.637852    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:51.638119    7316 sshutil.go:53] new ssh client: &{IP:172.19.55.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-982500\id_rsa Username:docker}
	I0415 20:27:51.760236    7316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.3232282s)
	I0415 20:27:51.760236    7316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 20:27:51.817700    7316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0415 20:27:51.869324    7316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 20:27:51.918499    7316 provision.go:87] duration metric: took 16.2755118s to configureAuth
	I0415 20:27:51.918499    7316 buildroot.go:189] setting minikube options for container-runtime
	I0415 20:27:51.919311    7316 config.go:182] Loaded profile config "kubernetes-upgrade-982500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 20:27:51.919311    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:54.302644    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:54.302644    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:54.302933    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:27:57.191537    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:27:57.192317    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:57.209732    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:27:57.210343    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:27:57.210409    7316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 20:27:57.352429    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 20:27:57.352548    7316 buildroot.go:70] root file system type: tmpfs
	I0415 20:27:57.352753    7316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 20:27:57.352817    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:27:59.732838    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:27:59.733020    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:27:59.733020    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:02.583741    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:02.583741    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:02.590579    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:28:02.591982    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:28:02.591982    7316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 20:28:02.762485    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
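The comment block inside the unit file above explains why `ExecStart=` appears twice: the first, empty directive clears any inherited start command so systemd does not see two `ExecStart=` settings. The same reset-then-set pattern applies to ordinary drop-in overrides; a minimal sketch (a hypothetical override file, not minikube's actual unit):

```ini
# /etc/systemd/system/docker.service.d/override.conf (hypothetical example)
[Service]
# Clear the ExecStart inherited from the base unit first; without this,
# systemd refuses to start with "more than one ExecStart= setting".
ExecStart=
ExecStart=/usr/bin/dockerd --some-flag
```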
	I0415 20:28:02.762485    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:28:05.106466    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:28:05.106466    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:05.106938    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:07.941638    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:07.941638    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:07.948477    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:28:07.949129    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:28:07.949129    7316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 20:28:10.540605    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
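The SSH command above uses a compact install-if-changed idiom: `diff -u` exits zero when the files match, so the `|| { ... }` branch (move the staged file into place, then reload/enable/restart) runs only when the unit actually changed. The "can't stat" output in the log is the expected case on a fresh VM, since `diff` also fails when the target does not exist yet. A standalone sketch with a hypothetical helper name:

```shell
#!/bin/sh
# Sketch of the idiom above: install a staged file only when it differs
# from (or replaces a missing) current file. "updated" stands in for the
# real daemon-reload / enable / restart steps.
update_if_changed() {
  target=$1
  staged=$2
  if ! diff -u "$target" "$staged" >/dev/null 2>&1; then
    mv "$staged" "$target"
    echo "updated"
  fi
}
```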
	I0415 20:28:10.540605    7316 machine.go:97] duration metric: took 51.4450995s to provisionDockerMachine
	I0415 20:28:10.540605    7316 start.go:293] postStartSetup for "kubernetes-upgrade-982500" (driver="hyperv")
	I0415 20:28:10.540605    7316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 20:28:10.555960    7316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 20:28:10.555960    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:28:12.882096    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:28:12.882755    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:12.882755    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:15.690869    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:15.691168    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:15.691662    7316 sshutil.go:53] new ssh client: &{IP:172.19.55.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-982500\id_rsa Username:docker}
	I0415 20:28:15.798738    7316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2427371s)
	I0415 20:28:15.813483    7316 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 20:28:15.821369    7316 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 20:28:15.821369    7316 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 20:28:15.822024    7316 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 20:28:15.822799    7316 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 20:28:15.841400    7316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 20:28:15.865673    7316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 20:28:15.922674    7316 start.go:296] duration metric: took 5.3820274s for postStartSetup
	I0415 20:28:15.922674    7316 fix.go:56] duration metric: took 1m38.3290041s for fixHost
	I0415 20:28:15.922674    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:28:18.271957    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:28:18.272988    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:18.272988    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:21.302152    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:21.302152    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:21.312086    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:28:21.312919    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:28:21.312919    7316 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 20:28:21.454200    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713212901.461331310
	
	I0415 20:28:21.454200    7316 fix.go:216] guest clock: 1713212901.461331310
	I0415 20:28:21.454200    7316 fix.go:229] Guest: 2024-04-15 20:28:21.46133131 +0000 UTC Remote: 2024-04-15 20:28:15.9226746 +0000 UTC m=+352.186792201 (delta=5.53865671s)
	I0415 20:28:21.454200    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:28:23.991414    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:28:23.992150    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:23.992240    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:26.956599    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:26.957173    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:26.964430    7316 main.go:141] libmachine: Using SSH client type: native
	I0415 20:28:26.965290    7316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.55.157 22 <nil> <nil>}
	I0415 20:28:26.965290    7316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713212901
	I0415 20:28:27.106184    7316 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 20:28:21 UTC 2024
	
	I0415 20:28:27.106303    7316 fix.go:236] clock set: Mon Apr 15 20:28:21 UTC 2024
	 (err=<nil>)
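The two SSH commands above implement the clock fix logged by fix.go: read the guest's epoch time with `date +%s.%N`, compute the delta against the host-side timestamp (5.53s here), and write the corrected time back with `sudo date -s @<epoch>`. A hedged sketch of that comparison (the guest value is taken from the log; the host value and the 2-second threshold are assumptions for illustration):

```shell
#!/bin/sh
# Compare guest vs. host epoch seconds and resync only when the absolute
# skew exceeds a threshold. Values below are illustrative.
guest=1713212901   # from `date +%s` on the guest (value seen in the log)
host=1713212895    # hypothetical host-side timestamp
delta=$((guest - host))
abs=${delta#-}     # strip a leading minus sign to get |delta|
if [ "$abs" -gt 2 ]; then
  echo "resync: sudo date -s @$guest"
fi
```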
	I0415 20:28:27.106303    7316 start.go:83] releasing machines lock for "kubernetes-upgrade-982500", held for 1m49.5127038s
	I0415 20:28:27.106632    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:28:29.639342    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:28:29.639342    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:29.639879    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:32.677030    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:32.677030    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:32.681195    7316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 20:28:32.681741    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:28:32.696675    7316 ssh_runner.go:195] Run: cat /version.json
	I0415 20:28:32.696675    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-982500 ).state
	I0415 20:28:35.590965    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:28:35.591973    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:35.591973    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:35.678973    7316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:28:35.678973    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:35.678973    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-982500 ).networkadapters[0]).ipaddresses[0]
	I0415 20:28:38.859698    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:38.859766    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:38.859766    7316 sshutil.go:53] new ssh client: &{IP:172.19.55.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-982500\id_rsa Username:docker}
	I0415 20:28:38.940783    7316 main.go:141] libmachine: [stdout =====>] : 172.19.55.157
	
	I0415 20:28:38.941681    7316 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:28:38.942919    7316 sshutil.go:53] new ssh client: &{IP:172.19.55.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-982500\id_rsa Username:docker}
	I0415 20:28:39.019549    7316 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.3381711s)
	I0415 20:28:39.042124    7316 ssh_runner.go:235] Completed: cat /version.json: (6.3452822s)
	I0415 20:28:39.059120    7316 ssh_runner.go:195] Run: systemctl --version
	I0415 20:28:39.089424    7316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 20:28:39.099587    7316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 20:28:39.114200    7316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0415 20:28:39.149452    7316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0415 20:28:39.183416    7316 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
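The `find ... -exec sed` commands above rewrite any bridge/podman CNI config under /etc/cni/net.d so its `"subnet"` (and gateway) match minikube's 10.244.0.0/16 pod CIDR. A minimal standalone reproduction of the subnet substitution, run against a temp file rather than a real CNI directory (GNU `sed -i` assumed):

```shell
#!/bin/sh
# Rewrite a CNI-style "subnet" line to the 10.244.0.0/16 pod CIDR.
conf=$(mktemp)
printf '%s\n' '      "subnet": "10.88.0.0/16",' > "$conf"
sed -i -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' "$conf"
cat "$conf"
```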
	I0415 20:28:39.183561    7316 start.go:494] detecting cgroup driver to use...
	I0415 20:28:39.184472    7316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 20:28:39.242415    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 20:28:39.279202    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 20:28:39.303236    7316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 20:28:39.318437    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 20:28:39.361070    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 20:28:39.402987    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 20:28:39.440369    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 20:28:39.476931    7316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 20:28:39.517679    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 20:28:39.561257    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 20:28:39.607951    7316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
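The sed commands above all target keys in /etc/containerd/config.toml. An abbreviated sketch of where those keys live after the edits (hypothetical fragment, not the VM's full file; key paths follow containerd's standard CRI plugin layout):

```toml
# Abbreviated sketch of the keys rewritten above (not a complete config).
[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.9"
  restrict_oom_score_adj = false
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # "cgroupfs" driver means systemd cgroup management is off:
    SystemdCgroup = false
```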
	I0415 20:28:39.653287    7316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 20:28:39.693011    7316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 20:28:39.727159    7316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 20:28:39.967384    7316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 20:28:40.005852    7316 start.go:494] detecting cgroup driver to use...
	I0415 20:28:40.019731    7316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 20:28:40.061954    7316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 20:28:40.101845    7316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 20:28:40.154887    7316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 20:28:40.200517    7316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 20:28:40.244130    7316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 20:28:40.320028    7316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 20:28:40.354254    7316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 20:28:40.410186    7316 ssh_runner.go:195] Run: which cri-dockerd
	I0415 20:28:40.433169    7316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 20:28:40.454981    7316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 20:28:40.510518    7316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 20:28:40.751559    7316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 20:28:40.996016    7316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 20:28:40.996422    7316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 20:28:41.053205    7316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 20:28:41.292740    7316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 20:29:42.466979    7316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1737616s)
	I0415 20:29:42.481660    7316 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0415 20:29:42.518319    7316 out.go:177] 
	W0415 20:29:42.520862    7316 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 20:28:08 kubernetes-upgrade-982500 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:08.620465809Z" level=info msg="Starting up"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:08.621661026Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:08.623554453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.662132599Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.692676531Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.692790933Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693000536Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693024636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693836548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693943849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694197753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694355255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694462657Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694498357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.695116566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.695920077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699109422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699236024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699507128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699610530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.700136737Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.700257039Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.700278039Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.702914276Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703398083Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703467284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703551685Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703571386Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703652487Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704465398Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704616300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704656701Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704702902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704721502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704737402Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704753702Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704770103Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704786703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704801303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704816503Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704833504Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704856904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704873604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704897104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704915705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704933705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704948805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704962805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704978206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704994606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705014606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705029706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705044807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705058407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705077507Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705100007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705114508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705135008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705453612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705572314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705591414Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705604114Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705683716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705710916Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705725616Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706004520Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706207423Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706454826Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706558628Z" level=info msg="containerd successfully booted in 0.048857s"
	Apr 15 20:28:09 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:09.688651136Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 20:28:09 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:09.835452884Z" level=info msg="Loading containers: start."
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.335439664Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.439399732Z" level=info msg="Loading containers: done."
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.476108315Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.476703423Z" level=info msg="Daemon has completed initialization"
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.545528629Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 20:28:10 kubernetes-upgrade-982500 systemd[1]: Started Docker Application Container Engine.
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.548351066Z" level=info msg="API listen on [::]:2376"
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.329984670Z" level=info msg="Processing signal 'terminated'"
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.331293171Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.332865372Z" level=info msg="Daemon shutdown complete"
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.332936372Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.332981572Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 20:28:41 kubernetes-upgrade-982500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 20:28:42 kubernetes-upgrade-982500 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 20:28:42 kubernetes-upgrade-982500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 20:28:42 kubernetes-upgrade-982500 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 20:28:42 kubernetes-upgrade-982500 dockerd[1124]: time="2024-04-15T20:28:42.427392760Z" level=info msg="Starting up"
	Apr 15 20:29:42 kubernetes-upgrade-982500 dockerd[1124]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 20:29:42 kubernetes-upgrade-982500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 20:29:42 kubernetes-upgrade-982500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 20:29:42 kubernetes-upgrade-982500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 15 20:28:08 kubernetes-upgrade-982500 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:08.620465809Z" level=info msg="Starting up"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:08.621661026Z" level=info msg="containerd not running, starting managed containerd"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:08.623554453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.662132599Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.692676531Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.692790933Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693000536Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693024636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693836548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.693943849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694197753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694355255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694462657Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.694498357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.695116566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.695920077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699109422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699236024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699507128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.699610530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.700136737Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.700257039Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.700278039Z" level=info msg="metadata content store policy set" policy=shared
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.702914276Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703398083Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703467284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703551685Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703571386Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.703652487Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704465398Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704616300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704656701Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704702902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704721502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704737402Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704753702Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704770103Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704786703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704801303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704816503Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704833504Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704856904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704873604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704897104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704915705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704933705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704948805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704962805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704978206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.704994606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705014606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705029706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705044807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705058407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705077507Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705100007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705114508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705135008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705453612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705572314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705591414Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705604114Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705683716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705710916Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.705725616Z" level=info msg="NRI interface is disabled by configuration."
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706004520Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706207423Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706454826Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 15 20:28:08 kubernetes-upgrade-982500 dockerd[659]: time="2024-04-15T20:28:08.706558628Z" level=info msg="containerd successfully booted in 0.048857s"
	Apr 15 20:28:09 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:09.688651136Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 15 20:28:09 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:09.835452884Z" level=info msg="Loading containers: start."
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.335439664Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.439399732Z" level=info msg="Loading containers: done."
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.476108315Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.476703423Z" level=info msg="Daemon has completed initialization"
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.545528629Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 15 20:28:10 kubernetes-upgrade-982500 systemd[1]: Started Docker Application Container Engine.
	Apr 15 20:28:10 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:10.548351066Z" level=info msg="API listen on [::]:2376"
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.329984670Z" level=info msg="Processing signal 'terminated'"
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.331293171Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.332865372Z" level=info msg="Daemon shutdown complete"
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.332936372Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 15 20:28:41 kubernetes-upgrade-982500 dockerd[652]: time="2024-04-15T20:28:41.332981572Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 15 20:28:41 kubernetes-upgrade-982500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 15 20:28:42 kubernetes-upgrade-982500 systemd[1]: docker.service: Deactivated successfully.
	Apr 15 20:28:42 kubernetes-upgrade-982500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 15 20:28:42 kubernetes-upgrade-982500 systemd[1]: Starting Docker Application Container Engine...
	Apr 15 20:28:42 kubernetes-upgrade-982500 dockerd[1124]: time="2024-04-15T20:28:42.427392760Z" level=info msg="Starting up"
	Apr 15 20:29:42 kubernetes-upgrade-982500 dockerd[1124]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 15 20:29:42 kubernetes-upgrade-982500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 15 20:29:42 kubernetes-upgrade-982500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 15 20:29:42 kubernetes-upgrade-982500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0415 20:29:42.521452    7316 out.go:239] * 
	* 
	W0415 20:29:42.523013    7316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 20:29:42.527393    7316 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-982500 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=hyperv : exit status 90
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-982500 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-982500 version --output=json: exit status 1 (203.288ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-982500" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-04-15 20:29:42.950815 +0000 UTC m=+10244.848686901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-982500 -n kubernetes-upgrade-982500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-982500 -n kubernetes-upgrade-982500: exit status 6 (13.8910773s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 20:29:43.074965    7628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0415 20:29:56.767934    7628 status.go:417] kubeconfig endpoint: get endpoint: "kubernetes-upgrade-982500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-982500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-982500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-982500
E0415 20:30:10.569298   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-982500: (1m1.8979378s)
--- FAIL: TestKubernetesUpgrade (1067.91s)
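
The journal above shows the restart failing because dockerd could not dial /run/containerd/containerd.sock before its deadline. A hedged sketch of follow-up diagnostics (profile name taken from this run; the first two commands are the ones the error output itself suggests, and all assume the VM was still reachable at the time):

```shell
# Run the diagnostics suggested by the error output inside the minikube VM.
# Profile name is from this run; assumes the VM is still up and reachable.
minikube ssh -p kubernetes-upgrade-982500 -- "systemctl status docker.service"
minikube ssh -p kubernetes-upgrade-982500 -- "sudo journalctl -xeu docker.service --no-pager | tail -n 50"
# dockerd normally manages its own containerd under /var/run/docker/containerd/,
# yet the restart timed out dialing /run/containerd/containerd.sock, so the
# system containerd unit is worth checking as well:
minikube ssh -p kubernetes-upgrade-982500 -- "systemctl status containerd"
```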

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (299.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-993800 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-993800 --driver=hyperv: exit status 1 (4m59.6160648s)

                                                
                                                
-- stdout --
	* [NoKubernetes-993800] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-993800" primary control-plane node in "NoKubernetes-993800" cluster

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 20:08:10.391148    5916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-993800 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-993800 -n NoKubernetes-993800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-993800 -n NoKubernetes-993800: exit status 7 (261.521ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 20:13:09.969273    8632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-993800" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.88s)
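
The "Unable to resolve the current Docker CLI context \"default\"" warning recurs in every failure above; it refers to the Windows host's Docker context store (the missing meta.json under .docker\contexts\meta), not to the hyperv start itself. A sketch for checking that store on the host (assumes the Docker CLI is installed; `docker context` subcommands are standard CLI features):

```shell
# List the Docker CLI contexts known on the host; the warning means the
# "default" context's metadata file is missing from the context store.
docker context ls
# Point the CLI back at the default (local daemon) context:
docker context use default
```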

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (362.46s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-639400 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-639400 --alsologtostderr -v=1 --driver=hyperv: exit status 1 (5m26.0211714s)

                                                
                                                
-- stdout --
	* [pause-639400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "pause-639400" primary control-plane node in "pause-639400" cluster
	* Updating the running hyperv "pause-639400" VM ...
	* Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	* Configuring bridge CNI (Container Networking Interface) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 20:27:16.375769    6588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 20:27:16.464457    6588 out.go:291] Setting OutFile to fd 1220 ...
	I0415 20:27:16.465338    6588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 20:27:16.465338    6588 out.go:304] Setting ErrFile to fd 1196...
	I0415 20:27:16.465338    6588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 20:27:16.492219    6588 out.go:298] Setting JSON to false
	I0415 20:27:16.496988    6588 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24563,"bootTime":1713188273,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 20:27:16.497163    6588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 20:27:16.502499    6588 out.go:177] * [pause-639400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 20:27:16.508436    6588 notify.go:220] Checking for updates...
	I0415 20:27:16.513026    6588 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 20:27:16.515476    6588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 20:27:16.518419    6588 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 20:27:16.521012    6588 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 20:27:16.523317    6588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 20:27:16.526949    6588 config.go:182] Loaded profile config "pause-639400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 20:27:16.527924    6588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 20:27:22.733020    6588 out.go:177] * Using the hyperv driver based on existing profile
	I0415 20:27:22.740026    6588 start.go:297] selected driver: hyperv
	I0415 20:27:22.740026    6588 start.go:901] validating driver "hyperv" against &{Name:pause-639400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-639400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.51.119 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 20:27:22.741061    6588 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 20:27:22.803762    6588 cni.go:84] Creating CNI manager for ""
	I0415 20:27:22.803762    6588 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 20:27:22.803762    6588 start.go:340] cluster config:
	{Name:pause-639400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-639400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.51.119 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 20:27:22.803762    6588 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 20:27:22.809756    6588 out.go:177] * Starting "pause-639400" primary control-plane node in "pause-639400" cluster
	I0415 20:27:22.816657    6588 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 20:27:22.816898    6588 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 20:27:22.816980    6588 cache.go:56] Caching tarball of preloaded images
	I0415 20:27:22.817141    6588 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 20:27:22.817141    6588 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 20:27:22.817778    6588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\config.json ...
	I0415 20:27:22.820737    6588 start.go:360] acquireMachinesLock for pause-639400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 20:30:16.253261    6588 start.go:364] duration metric: took 2m53.4311708s to acquireMachinesLock for "pause-639400"
	I0415 20:30:16.253450    6588 start.go:96] Skipping create...Using existing machine configuration
	I0415 20:30:16.253450    6588 fix.go:54] fixHost starting: 
	I0415 20:30:16.254744    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:18.651277    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:18.651501    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:18.651501    6588 fix.go:112] recreateIfNeeded on pause-639400: state=Running err=<nil>
	W0415 20:30:18.651659    6588 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 20:30:18.656980    6588 out.go:177] * Updating the running hyperv "pause-639400" VM ...
	I0415 20:30:18.659869    6588 machine.go:94] provisionDockerMachine start ...
	I0415 20:30:18.659932    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:21.059395    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:21.060093    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:21.060093    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:30:23.911591    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:30:23.912000    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:23.922341    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:30:23.922706    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:30:23.922706    6588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 20:30:24.077537    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-639400
	
	I0415 20:30:24.077660    6588 buildroot.go:166] provisioning hostname "pause-639400"
	I0415 20:30:24.077752    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:26.534313    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:26.534383    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:26.534463    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:30:29.310961    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:30:29.311127    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:29.317024    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:30:29.317527    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:30:29.317527    6588 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-639400 && echo "pause-639400" | sudo tee /etc/hostname
	I0415 20:30:29.494316    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-639400
	
	I0415 20:30:29.494383    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:31.807131    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:31.807131    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:31.807131    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:30:34.813153    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:30:34.813153    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:34.820750    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:30:34.820750    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:30:34.821328    6588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-639400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-639400/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-639400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 20:30:34.971026    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 20:30:34.971090    6588 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0415 20:30:34.971090    6588 buildroot.go:174] setting up certificates
	I0415 20:30:34.971090    6588 provision.go:84] configureAuth start
	I0415 20:30:34.971187    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:37.271642    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:37.271642    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:37.272377    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:30:40.099106    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:30:40.099106    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:40.100117    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:42.461294    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:42.461294    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:42.461294    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:30:45.295883    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:30:45.295883    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:45.295883    6588 provision.go:143] copyHostCerts
	I0415 20:30:45.297477    6588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0415 20:30:45.297477    6588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0415 20:30:45.297983    6588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 20:30:45.299258    6588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0415 20:30:45.299381    6588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0415 20:30:45.299824    6588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 20:30:45.300538    6588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0415 20:30:45.300538    6588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0415 20:30:45.301244    6588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0415 20:30:45.302164    6588 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-639400 san=[127.0.0.1 172.19.51.119 localhost minikube pause-639400]
	I0415 20:30:45.421840    6588 provision.go:177] copyRemoteCerts
	I0415 20:30:45.436452    6588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 20:30:45.436452    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:47.885733    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:47.885733    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:47.885733    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:30:50.943751    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:30:50.943751    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:50.944466    6588 sshutil.go:53] new ssh client: &{IP:172.19.51.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-639400\id_rsa Username:docker}
	I0415 20:30:51.064621    6588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.6281245s)
	I0415 20:30:51.065359    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 20:30:51.120202    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I0415 20:30:51.176199    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 20:30:51.241982    6588 provision.go:87] duration metric: took 16.2707625s to configureAuth
	I0415 20:30:51.242106    6588 buildroot.go:189] setting minikube options for container-runtime
	I0415 20:30:51.242795    6588 config.go:182] Loaded profile config "pause-639400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 20:30:51.242879    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:53.589749    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:53.589749    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:53.589749    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:30:56.516619    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:30:56.516698    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:56.527456    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:30:56.527456    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:30:56.528374    6588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 20:30:56.689974    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 20:30:56.689974    6588 buildroot.go:70] root file system type: tmpfs
	I0415 20:30:56.689974    6588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 20:30:56.690527    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:30:59.169642    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:30:59.169862    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:30:59.169862    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:02.103115    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:02.103466    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:02.110188    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:31:02.110724    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:31:02.110949    6588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 20:31:02.292016    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 20:31:02.292016    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:31:04.637231    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:31:04.637372    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:04.637428    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:07.451077    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:07.451927    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:07.457390    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:31:07.457390    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:31:07.457390    6588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 20:31:07.610635    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 20:31:07.610635    6588 machine.go:97] duration metric: took 48.9503792s to provisionDockerMachine
	I0415 20:31:07.610635    6588 start.go:293] postStartSetup for "pause-639400" (driver="hyperv")
	I0415 20:31:07.610635    6588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 20:31:07.625994    6588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 20:31:07.625994    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:31:09.968070    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:31:09.968070    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:09.968070    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:12.730080    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:12.730080    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:12.731063    6588 sshutil.go:53] new ssh client: &{IP:172.19.51.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-639400\id_rsa Username:docker}
	I0415 20:31:12.846376    6588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2202762s)
	I0415 20:31:12.862142    6588 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 20:31:12.870275    6588 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 20:31:12.870275    6588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0415 20:31:12.871113    6588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0415 20:31:12.871948    6588 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem -> 112722.pem in /etc/ssl/certs
	I0415 20:31:12.891148    6588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 20:31:12.914809    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /etc/ssl/certs/112722.pem (1708 bytes)
	I0415 20:31:12.966759    6588 start.go:296] duration metric: took 5.35598s for postStartSetup
	I0415 20:31:12.966903    6588 fix.go:56] duration metric: took 56.7128612s for fixHost
	I0415 20:31:12.966993    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:31:15.345509    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:31:15.345509    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:15.345509    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:18.341080    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:18.341080    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:18.346999    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:31:18.347819    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:31:18.347926    6588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 20:31:18.500143    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713213078.498609539
	
	I0415 20:31:18.500143    6588 fix.go:216] guest clock: 1713213078.498609539
	I0415 20:31:18.500143    6588 fix.go:229] Guest: 2024-04-15 20:31:18.498609539 +0000 UTC Remote: 2024-04-15 20:31:12.9669523 +0000 UTC m=+236.693940901 (delta=5.531657239s)
	I0415 20:31:18.500742    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:31:21.057484    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:31:21.057484    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:21.057588    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:24.014336    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:24.014336    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:24.021792    6588 main.go:141] libmachine: Using SSH client type: native
	I0415 20:31:24.022787    6588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.51.119 22 <nil> <nil>}
	I0415 20:31:24.022787    6588 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713213078
	I0415 20:31:24.185702    6588 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 15 20:31:18 UTC 2024
	
	I0415 20:31:24.185807    6588 fix.go:236] clock set: Mon Apr 15 20:31:18 UTC 2024
	 (err=<nil>)
	I0415 20:31:24.185807    6588 start.go:83] releasing machines lock for "pause-639400", held for 1m7.931895s
	I0415 20:31:24.186181    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:31:26.670231    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:31:26.670747    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:26.670940    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:29.683909    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:29.683909    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:29.690844    6588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 20:31:29.691068    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:31:29.708505    6588 ssh_runner.go:195] Run: cat /version.json
	I0415 20:31:29.708505    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-639400 ).state
	I0415 20:31:32.374500    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:31:32.374500    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:32.374676    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:32.410981    6588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:31:32.410981    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:32.410981    6588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-639400 ).networkadapters[0]).ipaddresses[0]
	I0415 20:31:35.454451    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:35.454451    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:35.455254    6588 sshutil.go:53] new ssh client: &{IP:172.19.51.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-639400\id_rsa Username:docker}
	I0415 20:31:35.489960    6588 main.go:141] libmachine: [stdout =====>] : 172.19.51.119
	
	I0415 20:31:35.490521    6588 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:31:35.491088    6588 sshutil.go:53] new ssh client: &{IP:172.19.51.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-639400\id_rsa Username:docker}
	I0415 20:31:37.574351    6588 ssh_runner.go:235] Completed: cat /version.json: (7.8656864s)
	I0415 20:31:37.574351    6588 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.8833471s)
	W0415 20:31:37.574695    6588 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0415 20:31:37.574888    6588 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0415 20:31:37.575002    6588 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0415 20:31:37.590462    6588 ssh_runner.go:195] Run: systemctl --version
	I0415 20:31:37.618743    6588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 20:31:37.628840    6588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 20:31:37.642833    6588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 20:31:37.663861    6588 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0415 20:31:37.663861    6588 start.go:494] detecting cgroup driver to use...
	I0415 20:31:37.663861    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 20:31:37.723824    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 20:31:37.768414    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 20:31:37.795420    6588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 20:31:37.808414    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 20:31:37.852068    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 20:31:37.891320    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 20:31:37.936693    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 20:31:37.977425    6588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 20:31:38.018947    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 20:31:38.064216    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 20:31:38.104208    6588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
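The run of `sed -i -r` commands above edits /etc/containerd/config.toml in place: pinning the pause image, forcing `SystemdCgroup = false` (the "cgroupfs" driver choice logged at 20:31:37), migrating runtime v1 names to `io.containerd.runc.v2`, and setting `conf_dir`. One of those substitutions, expressed as a Go regexp over the config text rather than a sed invocation (a sketch, assuming the same pattern as the sed expression):

```go
package main

import (
	"fmt"
	"regexp"
)

// systemdCgroup mirrors:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
// i.e. rewrite every SystemdCgroup assignment to false, keeping indentation.
var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

func forceCgroupfs(config string) string {
	return systemdCgroup.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "  SystemdCgroup = true"
	fmt.Println(forceCgroupfs(in)) //   SystemdCgroup = false
}
```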
	I0415 20:31:38.142365    6588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 20:31:38.183209    6588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 20:31:38.223249    6588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 20:31:38.544765    6588 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 20:31:38.586629    6588 start.go:494] detecting cgroup driver to use...
	I0415 20:31:38.608396    6588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 20:31:38.657142    6588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 20:31:38.696483    6588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 20:31:38.753143    6588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 20:31:38.804280    6588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 20:31:38.839384    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 20:31:38.905660    6588 ssh_runner.go:195] Run: which cri-dockerd
	I0415 20:31:38.931625    6588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 20:31:38.952626    6588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 20:31:39.004621    6588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 20:31:39.322726    6588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 20:31:39.701949    6588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 20:31:39.701949    6588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 20:31:39.767704    6588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 20:31:40.075520    6588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 20:31:53.155236    6588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.079203s)
	I0415 20:31:53.177725    6588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 20:31:53.237103    6588 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0415 20:31:53.304471    6588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 20:31:53.345800    6588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 20:31:53.595721    6588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 20:31:53.852671    6588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 20:31:54.102605    6588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 20:31:54.153837    6588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 20:31:54.191266    6588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 20:31:54.452494    6588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 20:31:54.607680    6588 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 20:31:54.622062    6588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 20:31:54.631905    6588 start.go:562] Will wait 60s for crictl version
	I0415 20:31:54.645874    6588 ssh_runner.go:195] Run: which crictl
	I0415 20:31:54.671229    6588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 20:31:54.747465    6588 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
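The `crictl version` output above is a set of `Key:  value` lines. A small sketch of parsing it into a map (the parser is ours, not minikube's):

```go
package main

import (
	"fmt"
	"strings"
)

// parseCrictlVersion splits `crictl version` output ("Key:  value" lines)
// into a map, e.g. "RuntimeName" -> "docker".
func parseCrictlVersion(out string) map[string]string {
	fields := make(map[string]string)
	for _, line := range strings.Split(out, "\n") {
		k, v, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  26.0.0\nRuntimeApiVersion:  v1"
	f := parseCrictlVersion(out)
	fmt.Println(f["RuntimeName"], f["RuntimeVersion"]) // docker 26.0.0
}
```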
	I0415 20:31:54.762703    6588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 20:31:54.816481    6588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 20:31:54.861487    6588 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0415 20:31:54.861487    6588 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0415 20:31:54.865534    6588 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0415 20:31:54.865534    6588 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0415 20:31:54.865534    6588 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0415 20:31:54.865534    6588 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:d7:0e Flags:up|broadcast|multicast|running}
	I0415 20:31:54.869488    6588 ip.go:210] interface addr: fe80::6b0:6318:bc6e:fcda/64
	I0415 20:31:54.869488    6588 ip.go:210] interface addr: 172.19.48.1/20
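The `getIPForInterface` step above scans host adapters for one whose name starts with "vEthernet (Default Switch)", then takes its first IPv4 address (172.19.48.1/20), skipping the fe80:: link-local entry. A sketch of that selection logic under those assumptions (helper names are illustrative):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// matchPrefix mirrors the adapter search in the log: return the first
// interface name that starts with the wanted prefix.
func matchPrefix(names []string, prefix string) (string, bool) {
	for _, n := range names {
		if strings.HasPrefix(n, prefix) {
			return n, true
		}
	}
	return "", false
}

// firstIPv4 returns the first IPv4 address among CIDR-formatted interface
// addresses, skipping IPv6 (e.g. fe80:: link-local) entries.
func firstIPv4(addrs []string) (net.IP, bool) {
	for _, a := range addrs {
		ip, _, err := net.ParseCIDR(a)
		if err != nil {
			continue
		}
		if v4 := ip.To4(); v4 != nil {
			return v4, true
		}
	}
	return nil, false
}

func main() {
	names := []string{"Ethernet 2", "Loopback Pseudo-Interface 1", "vEthernet (Default Switch)"}
	name, _ := matchPrefix(names, "vEthernet (Default Switch)")
	ip, _ := firstIPv4([]string{"fe80::6b0:6318:bc6e:fcda/64", "172.19.48.1/20"})
	fmt.Println(name, ip) // vEthernet (Default Switch) 172.19.48.1
}
```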
	I0415 20:31:54.885509    6588 ssh_runner.go:195] Run: grep 172.19.48.1	host.minikube.internal$ /etc/hosts
	I0415 20:31:54.893582    6588 kubeadm.go:877] updating cluster {Name:pause-639400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:pause-639400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.51.119 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin
:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 20:31:54.893582    6588 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 20:31:54.905460    6588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 20:31:54.934756    6588 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 20:31:54.934839    6588 docker.go:615] Images already preloaded, skipping extraction
	I0415 20:31:54.946907    6588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 20:31:54.984178    6588 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 20:31:54.984178    6588 cache_images.go:84] Images are preloaded, skipping loading
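The two `docker images --format {{.Repository}}:{{.Tag}}` listings above confirm every image required for v1.29.3 is already cached, so tarball extraction is skipped; note the two listings differ only in order, which the check must tolerate. An order-insensitive containment check, sketched (the function is illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"strings"
)

// imagesPreloaded reports whether every required image appears in the
// `docker images --format {{.Repository}}:{{.Tag}}` output, in any order.
func imagesPreloaded(dockerOut string, required []string) bool {
	have := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(dockerOut), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	out := "registry.k8s.io/kube-apiserver:v1.29.3\nregistry.k8s.io/kube-scheduler:v1.29.3\nregistry.k8s.io/pause:3.9"
	fmt.Println(imagesPreloaded(out, []string{"registry.k8s.io/pause:3.9"}))     // true
	fmt.Println(imagesPreloaded(out, []string{"registry.k8s.io/etcd:3.5.12-0"})) // false
}
```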
	I0415 20:31:54.984178    6588 kubeadm.go:928] updating node { 172.19.51.119 8443 v1.29.3 docker true true} ...
	I0415 20:31:54.985178    6588 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-639400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.51.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-639400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 20:31:54.995189    6588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 20:31:55.033196    6588 cni.go:84] Creating CNI manager for ""
	I0415 20:31:55.033196    6588 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 20:31:55.033196    6588 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 20:31:55.033196    6588 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.51.119 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-639400 NodeName:pause-639400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.51.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.51.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 20:31:55.033196    6588 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.51.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-639400"
	  kubeletExtraArgs:
	    node-ip: 172.19.51.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.51.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 20:31:55.051363    6588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 20:31:55.072848    6588 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 20:31:55.087655    6588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 20:31:55.108234    6588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0415 20:31:55.147069    6588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 20:31:55.184298    6588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0415 20:31:55.236891    6588 ssh_runner.go:195] Run: grep 172.19.51.119	control-plane.minikube.internal$ /etc/hosts
	I0415 20:31:55.257798    6588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 20:31:55.577924    6588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 20:31:55.628392    6588 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400 for IP: 172.19.51.119
	I0415 20:31:55.628392    6588 certs.go:194] generating shared ca certs ...
	I0415 20:31:55.628392    6588 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 20:31:55.629507    6588 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0415 20:31:55.629893    6588 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0415 20:31:55.630135    6588 certs.go:256] generating profile certs ...
	I0415 20:31:55.631075    6588 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\client.key
	I0415 20:31:55.631554    6588 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\apiserver.key.76a7d360
	I0415 20:31:55.631885    6588 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\proxy-client.key
	I0415 20:31:55.633715    6588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem (1338 bytes)
	W0415 20:31:55.634073    6588 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272_empty.pem, impossibly tiny 0 bytes
	I0415 20:31:55.634197    6588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0415 20:31:55.634626    6588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0415 20:31:55.634989    6588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 20:31:55.635243    6588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 20:31:55.635748    6588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem (1708 bytes)
	I0415 20:31:55.638122    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 20:31:55.722013    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 20:31:55.786679    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 20:31:55.853278    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 20:31:55.917882    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0415 20:31:55.977008    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 20:31:56.050583    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 20:31:56.108920    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-639400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 20:31:56.212030    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 20:31:56.285235    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11272.pem --> /usr/share/ca-certificates/11272.pem (1338 bytes)
	I0415 20:31:56.391567    6588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\112722.pem --> /usr/share/ca-certificates/112722.pem (1708 bytes)
	I0415 20:31:56.483569    6588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 20:31:56.543784    6588 ssh_runner.go:195] Run: openssl version
	I0415 20:31:56.578123    6588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 20:31:56.616115    6588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 20:31:56.630798    6588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0415 20:31:56.646785    6588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 20:31:56.671776    6588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 20:31:56.731762    6588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11272.pem && ln -fs /usr/share/ca-certificates/11272.pem /etc/ssl/certs/11272.pem"
	I0415 20:31:56.782784    6588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11272.pem
	I0415 20:31:56.792651    6588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:58 /usr/share/ca-certificates/11272.pem
	I0415 20:31:56.811937    6588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11272.pem
	I0415 20:31:56.846122    6588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11272.pem /etc/ssl/certs/51391683.0"
	I0415 20:31:56.884428    6588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112722.pem && ln -fs /usr/share/ca-certificates/112722.pem /etc/ssl/certs/112722.pem"
	I0415 20:31:56.922649    6588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112722.pem
	I0415 20:31:56.930891    6588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:58 /usr/share/ca-certificates/112722.pem
	I0415 20:31:56.948526    6588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112722.pem
	I0415 20:31:56.978151    6588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112722.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 20:31:57.016004    6588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 20:31:57.047623    6588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 20:31:57.081362    6588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 20:31:57.108596    6588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 20:31:57.133984    6588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 20:31:57.162391    6588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 20:31:57.187246    6588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0415 20:31:57.199123    6588 kubeadm.go:391] StartCluster: {Name:pause-639400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-639400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.51.119 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fa
lse olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 20:31:57.210668    6588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 20:31:57.254993    6588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0415 20:31:57.281569    6588 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 20:31:57.281569    6588 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 20:31:57.281569    6588 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 20:31:57.306512    6588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 20:31:57.330138    6588 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 20:31:57.331373    6588 kubeconfig.go:125] found "pause-639400" server: "https://172.19.51.119:8443"
	I0415 20:31:57.333821    6588 kapi.go:59] client config for pause-639400: &rest.Config{Host:"https://172.19.51.119:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-639400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-639400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f71600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 20:31:57.350813    6588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 20:31:57.372079    6588 kubeadm.go:624] The running cluster does not require reconfiguration: 172.19.51.119
	I0415 20:31:57.372079    6588 kubeadm.go:1154] stopping kube-system containers ...
	I0415 20:31:57.390614    6588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 20:31:57.425001    6588 docker.go:483] Stopping containers: [f7c69ba56b35 70c9b9569296 536e44a3ca8c c25d605c97db 73a6040fac7c cb3309317a35 a7060982afc5 ab80b633b2e2 812cecb347b0 31d138edd651 3b1b5874be38 e438a3c0365a a59de631b568 2545ad1379f8 bdc5d8c48918 7b591b22ad5d]
	I0415 20:31:57.437244    6588 ssh_runner.go:195] Run: docker stop f7c69ba56b35 70c9b9569296 536e44a3ca8c c25d605c97db 73a6040fac7c cb3309317a35 a7060982afc5 ab80b633b2e2 812cecb347b0 31d138edd651 3b1b5874be38 e438a3c0365a a59de631b568 2545ad1379f8 bdc5d8c48918 7b591b22ad5d
	I0415 20:32:06.869662    6588 ssh_runner.go:235] Completed: docker stop f7c69ba56b35 70c9b9569296 536e44a3ca8c c25d605c97db 73a6040fac7c cb3309317a35 a7060982afc5 ab80b633b2e2 812cecb347b0 31d138edd651 3b1b5874be38 e438a3c0365a a59de631b568 2545ad1379f8 bdc5d8c48918 7b591b22ad5d: (9.4323442s)
	I0415 20:32:06.885074    6588 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0415 20:32:06.949071    6588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 20:32:06.970711    6588 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Apr 15 20:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Apr 15 20:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Apr 15 20:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Apr 15 20:26 /etc/kubernetes/scheduler.conf
	
	I0415 20:32:06.985066    6588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 20:32:07.023011    6588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 20:32:07.061752    6588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 20:32:07.083980    6588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 20:32:07.099252    6588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 20:32:07.136651    6588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 20:32:07.156554    6588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 20:32:07.171386    6588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 20:32:07.208595    6588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 20:32:07.230680    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 20:32:07.322353    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 20:32:08.629353    6588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3068757s)
	I0415 20:32:08.629353    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0415 20:32:09.011793    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 20:32:09.135862    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0415 20:32:09.275466    6588 api_server.go:52] waiting for apiserver process to appear ...
	I0415 20:32:09.290679    6588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 20:32:09.337742    6588 api_server.go:72] duration metric: took 62.2757ms to wait for apiserver process to appear ...
	I0415 20:32:09.337742    6588 api_server.go:88] waiting for apiserver healthz status ...
	I0415 20:32:09.337742    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:14.353125    6588 api_server.go:269] stopped: https://172.19.51.119:8443/healthz: Get "https://172.19.51.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 20:32:14.353125    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:19.369037    6588 api_server.go:269] stopped: https://172.19.51.119:8443/healthz: Get "https://172.19.51.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 20:32:19.369125    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:21.031789    6588 api_server.go:269] stopped: https://172.19.51.119:8443/healthz: Get "https://172.19.51.119:8443/healthz": read tcp 172.19.48.1:53343->172.19.51.119:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I0415 20:32:21.031854    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:23.067186    6588 api_server.go:269] stopped: https://172.19.51.119:8443/healthz: Get "https://172.19.51.119:8443/healthz": dial tcp 172.19.51.119:8443: connectex: No connection could be made because the target machine actively refused it.
	I0415 20:32:23.068308    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:25.106229    6588 api_server.go:269] stopped: https://172.19.51.119:8443/healthz: Get "https://172.19.51.119:8443/healthz": dial tcp 172.19.51.119:8443: connectex: No connection could be made because the target machine actively refused it.
	I0415 20:32:25.106229    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:28.864508    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0415 20:32:28.864897    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0415 20:32:28.864976    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:28.954403    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0415 20:32:28.954403    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0415 20:32:29.343419    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:29.352960    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 20:32:29.353193    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 20:32:29.850336    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:29.864191    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 20:32:29.864374    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 20:32:30.340479    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:30.350471    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 200:
	ok
	I0415 20:32:30.368896    6588 api_server.go:141] control plane version: v1.29.3
	I0415 20:32:30.368896    6588 api_server.go:131] duration metric: took 21.0309875s to wait for apiserver health ...
	I0415 20:32:30.368896    6588 cni.go:84] Creating CNI manager for ""
	I0415 20:32:30.368896    6588 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 20:32:30.372290    6588 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 20:32:30.390309    6588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 20:32:30.408887    6588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0415 20:32:30.444330    6588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 20:32:30.492036    6588 system_pods.go:59] 6 kube-system pods found
	I0415 20:32:30.492036    6588 system_pods.go:61] "coredns-76f75df574-qwvw4" [3be11f2e-2668-4e51-8323-ac9c15cca9a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0415 20:32:30.492036    6588 system_pods.go:61] "etcd-pause-639400" [2b685dc9-d0e0-477e-950d-2ab09c060546] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-apiserver-pause-639400" [d1dd0b03-86db-42d0-b085-f6a0ba2b15b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-controller-manager-pause-639400" [551d2706-03ce-44d4-9259-23d3321e2b99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-proxy-rlncm" [8359a60d-b7eb-4782-880d-33a113ebdddb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-scheduler-pause-639400" [48b71639-d504-4cb9-b8fe-a89af86ef70b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0415 20:32:30.492036    6588 system_pods.go:74] duration metric: took 47.6285ms to wait for pod list to return data ...
	I0415 20:32:30.492036    6588 node_conditions.go:102] verifying NodePressure condition ...
	I0415 20:32:30.507006    6588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 20:32:30.507006    6588 node_conditions.go:123] node cpu capacity is 2
	I0415 20:32:30.507006    6588 node_conditions.go:105] duration metric: took 14.9696ms to run NodePressure ...
	I0415 20:32:30.508064    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 20:32:31.126885    6588 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0415 20:32:31.135465    6588 kubeadm.go:733] kubelet initialised
	I0415 20:32:31.135465    6588 kubeadm.go:734] duration metric: took 8.5795ms waiting for restarted kubelet to initialise ...
	I0415 20:32:31.135465    6588 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 20:32:31.144962    6588 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-qwvw4" in "kube-system" namespace to be "Ready" ...
	I0415 20:32:32.663677    6588 pod_ready.go:92] pod "coredns-76f75df574-qwvw4" in "kube-system" namespace has status "Ready":"True"
	I0415 20:32:32.663677    6588 pod_ready.go:81] duration metric: took 1.5187031s for pod "coredns-76f75df574-qwvw4" in "kube-system" namespace to be "Ready" ...
	I0415 20:32:32.663785    6588 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-639400" in "kube-system" namespace to be "Ready" ...
	I0415 20:32:34.686610    6588 pod_ready.go:102] pod "etcd-pause-639400" in "kube-system" namespace has status "Ready":"False"
	I0415 20:32:37.188430    6588 pod_ready.go:102] pod "etcd-pause-639400" in "kube-system" namespace has status "Ready":"False"
	I0415 20:32:39.676144    6588 pod_ready.go:102] pod "etcd-pause-639400" in "kube-system" namespace has status "Ready":"False"
	I0415 20:32:41.688269    6588 pod_ready.go:102] pod "etcd-pause-639400" in "kube-system" namespace has status "Ready":"False"

** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-windows-amd64.exe start -p pause-639400 --alsologtostderr -v=1 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-639400 -n pause-639400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-639400 -n pause-639400: (13.2574854s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-639400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-639400 logs -n 25: (9.3548208s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |               Args                |          Profile          |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-959000 sudo crio        | cilium-959000             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:08 UTC |                     |
	|         | config                            |                           |                   |                |                     |                     |
	| delete  | -p cilium-959000                  | cilium-959000             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:08 UTC | 15 Apr 24 20:08 UTC |
	| start   | -p force-systemd-env-298600       | force-systemd-env-298600  | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:08 UTC | 15 Apr 24 20:16 UTC |
	|         | --memory=2048                     |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| ssh     | force-systemd-flag-993800         | force-systemd-flag-993800 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:11 UTC | 15 Apr 24 20:11 UTC |
	|         | ssh docker info --format          |                           |                   |                |                     |                     |
	|         | {{.CgroupDriver}}                 |                           |                   |                |                     |                     |
	| delete  | -p force-systemd-flag-993800      | force-systemd-flag-993800 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:11 UTC | 15 Apr 24 20:12 UTC |
	| start   | -p running-upgrade-560000         | minikube                  | minikube6\jenkins | v1.26.0        | 15 Apr 24 20:12 GMT | 15 Apr 24 20:19 GMT |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --vm-driver=hyperv                |                           |                   |                |                     |                     |
	| delete  | -p NoKubernetes-993800            | NoKubernetes-993800       | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:13 UTC | 15 Apr 24 20:13 UTC |
	| start   | -p kubernetes-upgrade-982500      | kubernetes-upgrade-982500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:13 UTC | 15 Apr 24 20:21 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| delete  | -p offline-docker-993800          | offline-docker-993800     | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:15 UTC | 15 Apr 24 20:15 UTC |
	| start   | -p stopped-upgrade-505200         | minikube                  | minikube6\jenkins | v1.26.0        | 15 Apr 24 20:15 GMT | 15 Apr 24 20:24 GMT |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --vm-driver=hyperv                |                           |                   |                |                     |                     |
	| ssh     | force-systemd-env-298600          | force-systemd-env-298600  | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:16 UTC | 15 Apr 24 20:17 UTC |
	|         | ssh docker info --format          |                           |                   |                |                     |                     |
	|         | {{.CgroupDriver}}                 |                           |                   |                |                     |                     |
	| delete  | -p force-systemd-env-298600       | force-systemd-env-298600  | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:17 UTC | 15 Apr 24 20:17 UTC |
	| start   | -p pause-639400 --memory=2048     | pause-639400              | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:17 UTC | 15 Apr 24 20:27 UTC |
	|         | --install-addons=false            |                           |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv        |                           |                   |                |                     |                     |
	| start   | -p running-upgrade-560000         | running-upgrade-560000    | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:19 UTC | 15 Apr 24 20:28 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| stop    | -p kubernetes-upgrade-982500      | kubernetes-upgrade-982500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:21 UTC | 15 Apr 24 20:22 UTC |
	| start   | -p kubernetes-upgrade-982500      | kubernetes-upgrade-982500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:22 UTC |                     |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| stop    | stopped-upgrade-505200 stop       | minikube                  | minikube6\jenkins | v1.26.0        | 15 Apr 24 20:24 GMT | 15 Apr 24 20:24 GMT |
	| start   | -p stopped-upgrade-505200         | stopped-upgrade-505200    | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:24 UTC | 15 Apr 24 20:31 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p pause-639400                   | pause-639400              | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| delete  | -p running-upgrade-560000         | running-upgrade-560000    | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:28 UTC | 15 Apr 24 20:29 UTC |
	| start   | -p cert-expiration-452700         | cert-expiration-452700    | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:29 UTC |                     |
	|         | --memory=2048                     |                           |                   |                |                     |                     |
	|         | --cert-expiration=3m              |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| delete  | -p kubernetes-upgrade-982500      | kubernetes-upgrade-982500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:29 UTC | 15 Apr 24 20:30 UTC |
	| start   | -p docker-flags-503400            | docker-flags-503400       | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:30 UTC |                     |
	|         | --cache-images=false              |                           |                   |                |                     |                     |
	|         | --memory=2048                     |                           |                   |                |                     |                     |
	|         | --install-addons=false            |                           |                   |                |                     |                     |
	|         | --wait=false                      |                           |                   |                |                     |                     |
	|         | --docker-env=FOO=BAR              |                           |                   |                |                     |                     |
	|         | --docker-env=BAZ=BAT              |                           |                   |                |                     |                     |
	|         | --docker-opt=debug                |                           |                   |                |                     |                     |
	|         | --docker-opt=icc=true             |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| delete  | -p stopped-upgrade-505200         | stopped-upgrade-505200    | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:31 UTC | 15 Apr 24 20:32 UTC |
	| start   | -p cert-options-218200            | cert-options-218200       | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 20:32 UTC |                     |
	|         | --memory=2048                     |                           |                   |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1         |                           |                   |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15     |                           |                   |                |                     |                     |
	|         | --apiserver-names=localhost       |                           |                   |                |                     |                     |
	|         | --apiserver-names=www.google.com  |                           |                   |                |                     |                     |
	|         | --apiserver-port=8555             |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	|---------|-----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 20:32:28
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 20:32:28.965391    8808 out.go:291] Setting OutFile to fd 1920 ...
	I0415 20:32:28.965391    8808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 20:32:28.965391    8808 out.go:304] Setting ErrFile to fd 1876...
	I0415 20:32:28.965391    8808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 20:32:29.001059    8808 out.go:298] Setting JSON to false
	I0415 20:32:29.006058    8808 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24875,"bootTime":1713188273,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 20:32:29.006058    8808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 20:32:29.015064    8808 out.go:177] * [cert-options-218200] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 20:32:29.017046    8808 notify.go:220] Checking for updates...
	I0415 20:32:29.020058    8808 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 20:32:29.025068    8808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 20:32:29.030063    8808 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 20:32:29.032063    8808 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 20:32:29.035057    8808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 20:32:28.864508    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0415 20:32:28.864897    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0415 20:32:28.864976    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:28.954403    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0415 20:32:28.954403    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0415 20:32:29.343419    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:29.352960    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 20:32:29.353193    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 20:32:29.850336    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:29.864191    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0415 20:32:29.864374    6588 api_server.go:103] status: https://172.19.51.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0415 20:32:30.340479    6588 api_server.go:253] Checking apiserver healthz at https://172.19.51.119:8443/healthz ...
	I0415 20:32:30.350471    6588 api_server.go:279] https://172.19.51.119:8443/healthz returned 200:
	ok
	I0415 20:32:30.368896    6588 api_server.go:141] control plane version: v1.29.3
	I0415 20:32:30.368896    6588 api_server.go:131] duration metric: took 21.0309875s to wait for apiserver health ...
	I0415 20:32:30.368896    6588 cni.go:84] Creating CNI manager for ""
	I0415 20:32:30.368896    6588 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 20:32:30.372290    6588 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 20:32:30.390309    6588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 20:32:30.408887    6588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0415 20:32:30.444330    6588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 20:32:30.492036    6588 system_pods.go:59] 6 kube-system pods found
	I0415 20:32:30.492036    6588 system_pods.go:61] "coredns-76f75df574-qwvw4" [3be11f2e-2668-4e51-8323-ac9c15cca9a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0415 20:32:30.492036    6588 system_pods.go:61] "etcd-pause-639400" [2b685dc9-d0e0-477e-950d-2ab09c060546] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-apiserver-pause-639400" [d1dd0b03-86db-42d0-b085-f6a0ba2b15b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-controller-manager-pause-639400" [551d2706-03ce-44d4-9259-23d3321e2b99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-proxy-rlncm" [8359a60d-b7eb-4782-880d-33a113ebdddb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0415 20:32:30.492036    6588 system_pods.go:61] "kube-scheduler-pause-639400" [48b71639-d504-4cb9-b8fe-a89af86ef70b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0415 20:32:30.492036    6588 system_pods.go:74] duration metric: took 47.6285ms to wait for pod list to return data ...
	I0415 20:32:30.492036    6588 node_conditions.go:102] verifying NodePressure condition ...
	I0415 20:32:30.507006    6588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 20:32:30.507006    6588 node_conditions.go:123] node cpu capacity is 2
	I0415 20:32:30.507006    6588 node_conditions.go:105] duration metric: took 14.9696ms to run NodePressure ...
	I0415 20:32:30.508064    6588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 20:32:31.126885    6588 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0415 20:32:31.135465    6588 kubeadm.go:733] kubelet initialised
	I0415 20:32:31.135465    6588 kubeadm.go:734] duration metric: took 8.5795ms waiting for restarted kubelet to initialise ...
	I0415 20:32:31.135465    6588 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 20:32:31.144962    6588 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-qwvw4" in "kube-system" namespace to be "Ready" ...
	I0415 20:32:28.480660    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-452700 ).state
	I0415 20:32:31.070499    6396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:32:31.070499    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:31.070597    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-452700 ).networkadapters[0]).ipaddresses[0]
	I0415 20:32:29.038065    8808 config.go:182] Loaded profile config "cert-expiration-452700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 20:32:29.038065    8808 config.go:182] Loaded profile config "docker-flags-503400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 20:32:29.039071    8808 config.go:182] Loaded profile config "pause-639400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 20:32:29.039071    8808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 20:32:35.004608    8808 out.go:177] * Using the hyperv driver based on user configuration
	I0415 20:32:35.008781    8808 start.go:297] selected driver: hyperv
	I0415 20:32:35.008781    8808 start.go:901] validating driver "hyperv" against <nil>
	I0415 20:32:35.008909    8808 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 20:32:35.066508    8808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 20:32:35.067903    8808 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 20:32:35.067903    8808 cni.go:84] Creating CNI manager for ""
	I0415 20:32:35.067903    8808 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 20:32:35.067903    8808 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 20:32:35.067903    8808 start.go:340] cluster config:
	{Name:cert-options-218200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-options-218200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 20:32:35.067903    8808 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 20:32:35.070884    8808 out.go:177] * Starting "cert-options-218200" primary control-plane node in "cert-options-218200" cluster
	I0415 20:32:32.663677    6588 pod_ready.go:92] pod "coredns-76f75df574-qwvw4" in "kube-system" namespace has status "Ready":"True"
	I0415 20:32:32.663677    6588 pod_ready.go:81] duration metric: took 1.5187031s for pod "coredns-76f75df574-qwvw4" in "kube-system" namespace to be "Ready" ...
	I0415 20:32:32.663785    6588 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-639400" in "kube-system" namespace to be "Ready" ...
	I0415 20:32:34.686610    6588 pod_ready.go:102] pod "etcd-pause-639400" in "kube-system" namespace has status "Ready":"False"
	I0415 20:32:33.948767    6396 main.go:141] libmachine: [stdout =====>] : 
	I0415 20:32:33.948767    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:34.962567    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-452700 ).state
	I0415 20:32:35.074614    8808 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 20:32:35.074614    8808 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 20:32:35.074614    8808 cache.go:56] Caching tarball of preloaded images
	I0415 20:32:35.074614    8808 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 20:32:35.074614    8808 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 20:32:35.075586    8808 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-options-218200\config.json ...
	I0415 20:32:35.075586    8808 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-options-218200\config.json: {Name:mkab8a894e411ba3d4f12ca77209cefc250e047b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 20:32:35.076595    8808 start.go:360] acquireMachinesLock for cert-options-218200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 20:32:37.188430    6588 pod_ready.go:102] pod "etcd-pause-639400" in "kube-system" namespace has status "Ready":"False"
	I0415 20:32:39.676144    6588 pod_ready.go:102] pod "etcd-pause-639400" in "kube-system" namespace has status "Ready":"False"
	I0415 20:32:38.050785    6396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:32:38.050785    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:38.050785    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-452700 ).networkadapters[0]).ipaddresses[0]
	I0415 20:32:40.819417    6396 main.go:141] libmachine: [stdout =====>] : 172.19.61.9
	
	I0415 20:32:40.819417    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:40.819878    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-452700 ).state
	I0415 20:32:43.175561    6396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:32:43.176383    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:43.176383    6396 machine.go:94] provisionDockerMachine start ...
	I0415 20:32:43.176486    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-452700 ).state
	I0415 20:32:45.567426    6396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:32:45.567426    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:45.568241    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-452700 ).networkadapters[0]).ipaddresses[0]
	I0415 20:32:48.385600    6396 main.go:141] libmachine: [stdout =====>] : 172.19.61.9
	
	I0415 20:32:48.385652    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:48.392687    6396 main.go:141] libmachine: Using SSH client type: native
	I0415 20:32:48.393291    6396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.19.61.9 22 <nil> <nil>}
	I0415 20:32:48.393291    6396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 20:32:48.525759    6396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 20:32:48.525759    6396 buildroot.go:166] provisioning hostname "cert-expiration-452700"
	I0415 20:32:48.525948    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-452700 ).state
	I0415 20:32:50.842178    6396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 20:32:50.842178    6396 main.go:141] libmachine: [stderr =====>] : 
	I0415 20:32:50.842776    6396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-452700 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Apr 15 20:32:25 pause-639400 dockerd[4641]: time="2024-04-15T20:32:25.198455938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:25 pause-639400 dockerd[4641]: time="2024-04-15T20:32:25.198693238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:25 pause-639400 dockerd[4641]: time="2024-04-15T20:32:25.215165638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 20:32:25 pause-639400 dockerd[4641]: time="2024-04-15T20:32:25.215669938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 20:32:25 pause-639400 dockerd[4641]: time="2024-04-15T20:32:25.215943138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:25 pause-639400 dockerd[4641]: time="2024-04-15T20:32:25.216581238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:29 pause-639400 cri-dockerd[4953]: time="2024-04-15T20:32:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.946443397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.946529897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.946544897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.946648197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.981232917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.981397317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.981420917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:30 pause-639400 dockerd[4641]: time="2024-04-15T20:32:30.981926417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:31 pause-639400 cri-dockerd[4953]: time="2024-04-15T20:32:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ad5d2cdba1d6251bd05c3b57c1aa9d2d88a608b94d32d3801454ff39db8d605/resolv.conf as [nameserver 172.19.48.1]"
	Apr 15 20:32:31 pause-639400 cri-dockerd[4953]: time="2024-04-15T20:32:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f34dbac589adc8d2e5248d50396b05d0fe6624773df952a13f35b5e95ba2d104/resolv.conf as [nameserver 172.19.48.1]"
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.564273626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.564396427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.564422127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.564680627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.771771978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.771875778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.771934178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 15 20:32:31 pause-639400 dockerd[4641]: time="2024-04-15T20:32:31.772971879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	43c2f5c02cca6       cbb01a7bd410d       33 seconds ago       Running             coredns                   2                   f34dbac589adc       coredns-76f75df574-qwvw4
	2e480d473b25c       a1d263b5dc5b0       33 seconds ago       Running             kube-proxy                1                   5ad5d2cdba1d6       kube-proxy-rlncm
	7cd6d98bec108       6052a25da3f97       41 seconds ago       Running             kube-controller-manager   2                   73eb6adb7c78e       kube-controller-manager-pause-639400
	5b9ff4194cb2d       39f995c9f1996       42 seconds ago       Running             kube-apiserver            2                   321c80d20576d       kube-apiserver-pause-639400
	f7d52864935a9       3861cfcd7c04c       42 seconds ago       Running             etcd                      2                   d36b989f4fede       etcd-pause-639400
	7701e7689cfa4       39f995c9f1996       About a minute ago   Exited              kube-apiserver            1                   321c80d20576d       kube-apiserver-pause-639400
	834720fc26d9c       6052a25da3f97       About a minute ago   Exited              kube-controller-manager   1                   73eb6adb7c78e       kube-controller-manager-pause-639400
	f2c82c1d24cbd       8c390d98f50c0       About a minute ago   Running             kube-scheduler            1                   a218dd9cd6440       kube-scheduler-pause-639400
	f7c69ba56b358       cbb01a7bd410d       About a minute ago   Exited              coredns                   1                   536e44a3ca8cb       coredns-76f75df574-qwvw4
	70c9b95692964       3861cfcd7c04c       About a minute ago   Exited              etcd                      1                   c25d605c97db6       etcd-pause-639400
	cb3309317a35f       a1d263b5dc5b0       6 minutes ago        Exited              kube-proxy                0                   ab80b633b2e26       kube-proxy-rlncm
	812cecb347b0e       8c390d98f50c0       6 minutes ago        Exited              kube-scheduler            0                   2545ad1379f83       kube-scheduler-pause-639400
	
	
	==> coredns [43c2f5c02cca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35836 - 20016 "HINFO IN 1849281660848016047.2887555359585799563. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051679763s
	
	
	==> coredns [f7c69ba56b35] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e2b9de1191510a72356755223f06623b152d8cdd72ea393cca47fb3d34a5414574050e77e521fd64fc84b7e18fcd0fb5ead79ecf0a5a8be221bd0ffeb8c0080c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:41217 - 60123 "HINFO IN 447316343633881251.2869301868367259576. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.036915812s
	
	
	==> describe nodes <==
	Name:               pause-639400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-639400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=pause-639400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T20_26_21_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 20:26:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-639400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 20:32:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 20:32:29 +0000   Mon, 15 Apr 2024 20:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 20:32:29 +0000   Mon, 15 Apr 2024 20:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 20:32:29 +0000   Mon, 15 Apr 2024 20:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 20:32:29 +0000   Mon, 15 Apr 2024 20:26:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.51.119
	  Hostname:    pause-639400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015784Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd27d7b370c84b5085b117de30579086
	  System UUID:                d1013cd5-0d26-1b40-bda5-7977979ad48f
	  Boot ID:                    6edeca07-498a-4c63-afe2-bdd367d0d69a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-qwvw4                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 etcd-pause-639400                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         6m43s
	  kube-system                 kube-apiserver-pause-639400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-pause-639400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-proxy-rlncm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-pause-639400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 32s                    kube-proxy       
	  Normal  Starting                 6m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m53s (x8 over 6m53s)  kubelet          Node pause-639400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m53s (x8 over 6m53s)  kubelet          Node pause-639400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m53s (x7 over 6m53s)  kubelet          Node pause-639400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m43s                  kubelet          Node pause-639400 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m43s                  kubelet          Node pause-639400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s                  kubelet          Node pause-639400 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal  NodeReady                6m42s                  kubelet          Node pause-639400 status is now: NodeReady
	  Normal  RegisteredNode           6m31s                  node-controller  Node pause-639400 event: Registered Node pause-639400 in Controller
	  Normal  Starting                 55s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)      kubelet          Node pause-639400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)      kubelet          Node pause-639400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x7 over 55s)      kubelet          Node pause-639400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  55s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23s                    node-controller  Node pause-639400 event: Registered Node pause-639400 in Controller
	
	
	==> dmesg <==
	[Apr15 20:26] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +7.559197] systemd-fstab-generator[1717]: Ignoring "noauto" option for root device
	[  +0.125916] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.651160] hrtimer: interrupt took 3980267 ns
	[  +1.745379] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.174752] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.826647] systemd-fstab-generator[2346]: Ignoring "noauto" option for root device
	[  +0.198790] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.814792] kauditd_printk_skb: 88 callbacks suppressed
	[Apr15 20:31] systemd-fstab-generator[4210]: Ignoring "noauto" option for root device
	[  +0.776951] systemd-fstab-generator[4246]: Ignoring "noauto" option for root device
	[  +0.301995] systemd-fstab-generator[4259]: Ignoring "noauto" option for root device
	[  +0.449424] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +5.369062] kauditd_printk_skb: 87 callbacks suppressed
	[  +8.186485] systemd-fstab-generator[4831]: Ignoring "noauto" option for root device
	[  +0.258120] systemd-fstab-generator[4843]: Ignoring "noauto" option for root device
	[  +0.248138] systemd-fstab-generator[4855]: Ignoring "noauto" option for root device
	[  +0.337700] systemd-fstab-generator[4875]: Ignoring "noauto" option for root device
	[  +1.093697] systemd-fstab-generator[5099]: Ignoring "noauto" option for root device
	[  +0.073116] kauditd_printk_skb: 118 callbacks suppressed
	[Apr15 20:32] kauditd_printk_skb: 78 callbacks suppressed
	[  +2.052500] systemd-fstab-generator[5938]: Ignoring "noauto" option for root device
	[ +12.223452] kauditd_printk_skb: 17 callbacks suppressed
	[  +9.975453] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.637024] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [70c9b9569296] <==
	{"level":"info","ts":"2024-04-15T20:31:56.507686Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"14.201389ms"}
	{"level":"info","ts":"2024-04-15T20:31:56.524451Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-15T20:31:56.534386Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"f3d4a82da4b1b685","local-member-id":"1c0c1168998ddb0c","commit-index":599}
	{"level":"info","ts":"2024-04-15T20:31:56.534599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-15T20:31:56.535843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c became follower at term 2"}
	{"level":"info","ts":"2024-04-15T20:31:56.536463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 1c0c1168998ddb0c [peers: [], term: 2, commit: 599, applied: 0, lastindex: 599, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-15T20:31:56.541935Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-15T20:31:56.565146Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":524}
	{"level":"info","ts":"2024-04-15T20:31:56.576136Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-15T20:31:56.585098Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"1c0c1168998ddb0c","timeout":"7s"}
	{"level":"info","ts":"2024-04-15T20:31:56.585882Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"1c0c1168998ddb0c"}
	{"level":"info","ts":"2024-04-15T20:31:56.585918Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"1c0c1168998ddb0c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-15T20:31:56.5864Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-15T20:31:56.587327Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-15T20:31:56.588663Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-15T20:31:56.588752Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-15T20:31:56.589476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c switched to configuration voters=(2021009473732991756)"}
	{"level":"info","ts":"2024-04-15T20:31:56.589639Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f3d4a82da4b1b685","local-member-id":"1c0c1168998ddb0c","added-peer-id":"1c0c1168998ddb0c","added-peer-peer-urls":["https://172.19.51.119:2380"]}
	{"level":"info","ts":"2024-04-15T20:31:56.589914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3d4a82da4b1b685","local-member-id":"1c0c1168998ddb0c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T20:31:56.58995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T20:31:56.598328Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-15T20:31:56.59857Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1c0c1168998ddb0c","initial-advertise-peer-urls":["https://172.19.51.119:2380"],"listen-peer-urls":["https://172.19.51.119:2380"],"advertise-client-urls":["https://172.19.51.119:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.51.119:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-15T20:31:56.598595Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-15T20:31:56.598678Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.51.119:2380"}
	{"level":"info","ts":"2024-04-15T20:31:56.59869Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.51.119:2380"}
	
	
	==> etcd [f7d52864935a] <==
	{"level":"info","ts":"2024-04-15T20:32:25.801485Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-15T20:32:25.801496Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-15T20:32:25.805517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c switched to configuration voters=(2021009473732991756)"}
	{"level":"info","ts":"2024-04-15T20:32:25.806226Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f3d4a82da4b1b685","local-member-id":"1c0c1168998ddb0c","added-peer-id":"1c0c1168998ddb0c","added-peer-peer-urls":["https://172.19.51.119:2380"]}
	{"level":"info","ts":"2024-04-15T20:32:25.809543Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3d4a82da4b1b685","local-member-id":"1c0c1168998ddb0c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T20:32:25.809719Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T20:32:25.830415Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-15T20:32:25.830978Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.51.119:2380"}
	{"level":"info","ts":"2024-04-15T20:32:25.831405Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.51.119:2380"}
	{"level":"info","ts":"2024-04-15T20:32:25.832071Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1c0c1168998ddb0c","initial-advertise-peer-urls":["https://172.19.51.119:2380"],"listen-peer-urls":["https://172.19.51.119:2380"],"advertise-client-urls":["https://172.19.51.119:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.51.119:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-15T20:32:25.832356Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-15T20:32:26.691327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-15T20:32:26.691656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-15T20:32:26.691933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c received MsgPreVoteResp from 1c0c1168998ddb0c at term 2"}
	{"level":"info","ts":"2024-04-15T20:32:26.692246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c became candidate at term 3"}
	{"level":"info","ts":"2024-04-15T20:32:26.692466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c received MsgVoteResp from 1c0c1168998ddb0c at term 3"}
	{"level":"info","ts":"2024-04-15T20:32:26.692636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0c1168998ddb0c became leader at term 3"}
	{"level":"info","ts":"2024-04-15T20:32:26.692818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1c0c1168998ddb0c elected leader 1c0c1168998ddb0c at term 3"}
	{"level":"info","ts":"2024-04-15T20:32:26.935697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T20:32:26.935637Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1c0c1168998ddb0c","local-member-attributes":"{Name:pause-639400 ClientURLs:[https://172.19.51.119:2379]}","request-path":"/0/members/1c0c1168998ddb0c/attributes","cluster-id":"f3d4a82da4b1b685","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T20:32:26.941565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.51.119:2379"}
	{"level":"info","ts":"2024-04-15T20:32:26.942647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T20:32:26.944412Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T20:32:26.944627Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T20:32:26.94678Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:33:04 up 9 min,  0 users,  load average: 1.11, 0.78, 0.38
	Linux pause-639400 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5b9ff4194cb2] <==
	I0415 20:32:28.865006       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0415 20:32:28.865016       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0415 20:32:28.865028       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0415 20:32:28.972890       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 20:32:29.016554       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0415 20:32:29.017092       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 20:32:29.017494       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0415 20:32:29.018463       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0415 20:32:29.017521       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0415 20:32:29.018929       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 20:32:29.034130       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 20:32:29.036456       1 aggregator.go:165] initial CRD sync complete...
	I0415 20:32:29.036508       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 20:32:29.036529       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 20:32:29.036541       1 cache.go:39] Caches are synced for autoregister controller
	I0415 20:32:29.056846       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 20:32:29.724116       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0415 20:32:30.163759       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.51.119]
	I0415 20:32:30.166764       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 20:32:30.176958       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 20:32:30.790082       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 20:32:30.822383       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 20:32:30.944148       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 20:32:31.068686       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 20:32:31.108890       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [7701e7689cfa] <==
	I0415 20:32:00.556158       1 server.go:148] Version: v1.29.3
	I0415 20:32:00.556345       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0415 20:32:01.018775       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0415 20:32:01.026911       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0415 20:32:01.027005       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0415 20:32:01.027419       1 instance.go:297] Using reconciler: lease
	I0415 20:32:01.028369       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W0415 20:32:01.028832       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:01.029075       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:02.020632       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:02.029419       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:02.029438       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:03.554320       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:03.699595       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:03.819987       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:05.790496       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:05.958928       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:06.677081       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:09.522439       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:09.632897       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:10.705382       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:15.807851       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:16.122423       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 20:32:18.378996       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0415 20:32:21.028771       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [7cd6d98bec10] <==
	I0415 20:32:41.653016       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0415 20:32:41.653225       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0415 20:32:41.656230       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0415 20:32:41.661270       1 shared_informer.go:318] Caches are synced for daemon sets
	I0415 20:32:41.661923       1 shared_informer.go:318] Caches are synced for PV protection
	I0415 20:32:41.664795       1 shared_informer.go:318] Caches are synced for crt configmap
	I0415 20:32:41.665106       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0415 20:32:41.667129       1 shared_informer.go:318] Caches are synced for attach detach
	I0415 20:32:41.673575       1 shared_informer.go:318] Caches are synced for taint
	I0415 20:32:41.674384       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0415 20:32:41.675167       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-639400"
	I0415 20:32:41.675598       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0415 20:32:41.676729       1 event.go:376] "Event occurred" object="pause-639400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-639400 event: Registered Node pause-639400 in Controller"
	I0415 20:32:41.687774       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0415 20:32:41.691133       1 shared_informer.go:318] Caches are synced for job
	I0415 20:32:41.691614       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0415 20:32:41.742732       1 shared_informer.go:318] Caches are synced for disruption
	I0415 20:32:41.751008       1 shared_informer.go:318] Caches are synced for deployment
	I0415 20:32:41.757766       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0415 20:32:41.758141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="161.9µs"
	I0415 20:32:41.849407       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 20:32:41.864776       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 20:32:42.214881       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 20:32:42.215048       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 20:32:42.228798       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [834720fc26d9] <==
	I0415 20:32:00.538894       1 serving.go:380] Generated self-signed cert in-memory
	I0415 20:32:01.227366       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0415 20:32:01.227615       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 20:32:01.234703       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0415 20:32:01.235154       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0415 20:32:01.235647       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0415 20:32:01.236966       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0415 20:32:22.039850       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://172.19.51.119:8443/healthz\": dial tcp 172.19.51.119:8443: connect: connection refused"
	
	
	==> kube-proxy [2e480d473b25] <==
	I0415 20:32:31.861239       1 server_others.go:72] "Using iptables proxy"
	I0415 20:32:31.889085       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.51.119"]
	I0415 20:32:32.006273       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 20:32:32.006349       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 20:32:32.006378       1 server_others.go:168] "Using iptables Proxier"
	I0415 20:32:32.012820       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 20:32:32.013551       1 server.go:865] "Version info" version="v1.29.3"
	I0415 20:32:32.013605       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 20:32:32.017002       1 config.go:188] "Starting service config controller"
	I0415 20:32:32.017064       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 20:32:32.017106       1 config.go:97] "Starting endpoint slice config controller"
	I0415 20:32:32.017121       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 20:32:32.018370       1 config.go:315] "Starting node config controller"
	I0415 20:32:32.018415       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 20:32:32.117607       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 20:32:32.117664       1 shared_informer.go:318] Caches are synced for service config
	I0415 20:32:32.118800       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [cb3309317a35] <==
	I0415 20:26:35.641799       1 server_others.go:72] "Using iptables proxy"
	I0415 20:26:35.732604       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.51.119"]
	I0415 20:26:35.954184       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 20:26:35.954375       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 20:26:35.954401       1 server_others.go:168] "Using iptables Proxier"
	I0415 20:26:35.960092       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 20:26:35.966303       1 server.go:865] "Version info" version="v1.29.3"
	I0415 20:26:35.966330       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 20:26:35.971081       1 config.go:97] "Starting endpoint slice config controller"
	I0415 20:26:35.972352       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 20:26:35.972403       1 config.go:188] "Starting service config controller"
	I0415 20:26:35.972413       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 20:26:35.974935       1 config.go:315] "Starting node config controller"
	I0415 20:26:35.975557       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 20:26:36.073447       1 shared_informer.go:318] Caches are synced for service config
	I0415 20:26:36.073447       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 20:26:36.076068       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [812cecb347b0] <==
	W0415 20:26:18.025379       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 20:26:18.025910       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 20:26:18.076665       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 20:26:18.077116       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 20:26:18.212563       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 20:26:18.213055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 20:26:18.217750       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 20:26:18.217782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 20:26:18.221650       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 20:26:18.221749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 20:26:18.229373       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 20:26:18.229711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 20:26:18.271389       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 20:26:18.271617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 20:26:18.277511       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 20:26:18.277836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 20:26:18.468684       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 20:26:18.469019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 20:26:18.475993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 20:26:18.476043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0415 20:26:20.595072       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 20:31:40.187875       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0415 20:31:40.187964       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0415 20:31:40.190062       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0415 20:31:40.190854       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f2c82c1d24cb] <==
	E0415 20:32:28.903313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 20:32:28.903442       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 20:32:28.903514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 20:32:28.903775       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 20:32:28.904010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 20:32:28.904266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 20:32:28.903044       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 20:32:28.905448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 20:32:28.904578       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 20:32:28.905779       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 20:32:28.904653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 20:32:28.906345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 20:32:28.904691       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 20:32:28.904772       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 20:32:28.911242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 20:32:28.904931       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 20:32:28.911589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 20:32:28.905002       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 20:32:28.912058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 20:32:28.907349       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 20:32:28.912500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 20:32:28.913008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 20:32:28.939709       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 20:32:28.942245       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0415 20:32:41.653497       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 20:32:24 pause-639400 kubelet[5945]: E0415 20:32:24.503476    5945 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.51.119:8443: connect: connection refused
	Apr 15 20:32:24 pause-639400 kubelet[5945]: W0415 20:32:24.777957    5945 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.51.119:8443: connect: connection refused
	Apr 15 20:32:24 pause-639400 kubelet[5945]: E0415 20:32:24.778049    5945 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.51.119:8443: connect: connection refused
	Apr 15 20:32:24 pause-639400 kubelet[5945]: E0415 20:32:24.853239    5945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-639400?timeout=10s\": dial tcp 172.19.51.119:8443: connect: connection refused" interval="3.2s"
	Apr 15 20:32:27 pause-639400 kubelet[5945]: I0415 20:32:27.054335    5945 kubelet_node_status.go:73] "Attempting to register node" node="pause-639400"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: I0415 20:32:29.049683    5945 kubelet_node_status.go:112] "Node was previously registered" node="pause-639400"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: I0415 20:32:29.049882    5945 kubelet_node_status.go:76] "Successfully registered node" node="pause-639400"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: I0415 20:32:29.052665    5945 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: I0415 20:32:29.055103    5945 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: E0415 20:32:29.077737    5945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-639400\" not found"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: E0415 20:32:29.178634    5945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-639400\" not found"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: E0415 20:32:29.279213    5945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-639400\" not found"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: E0415 20:32:29.380331    5945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-639400\" not found"
	Apr 15 20:32:29 pause-639400 kubelet[5945]: E0415 20:32:29.390603    5945 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-639400\" not found"
	Apr 15 20:32:30 pause-639400 kubelet[5945]: I0415 20:32:30.191933    5945 apiserver.go:52] "Watching apiserver"
	Apr 15 20:32:30 pause-639400 kubelet[5945]: I0415 20:32:30.206140    5945 topology_manager.go:215] "Topology Admit Handler" podUID="3be11f2e-2668-4e51-8323-ac9c15cca9a3" podNamespace="kube-system" podName="coredns-76f75df574-qwvw4"
	Apr 15 20:32:30 pause-639400 kubelet[5945]: I0415 20:32:30.206340    5945 topology_manager.go:215] "Topology Admit Handler" podUID="8359a60d-b7eb-4782-880d-33a113ebdddb" podNamespace="kube-system" podName="kube-proxy-rlncm"
	Apr 15 20:32:30 pause-639400 kubelet[5945]: I0415 20:32:30.216841    5945 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 15 20:32:30 pause-639400 kubelet[5945]: I0415 20:32:30.263639    5945 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8359a60d-b7eb-4782-880d-33a113ebdddb-xtables-lock\") pod \"kube-proxy-rlncm\" (UID: \"8359a60d-b7eb-4782-880d-33a113ebdddb\") " pod="kube-system/kube-proxy-rlncm"
	Apr 15 20:32:30 pause-639400 kubelet[5945]: I0415 20:32:30.263944    5945 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8359a60d-b7eb-4782-880d-33a113ebdddb-lib-modules\") pod \"kube-proxy-rlncm\" (UID: \"8359a60d-b7eb-4782-880d-33a113ebdddb\") " pod="kube-system/kube-proxy-rlncm"
	Apr 15 20:32:31 pause-639400 kubelet[5945]: E0415 20:32:31.096604    5945 kuberuntime_manager.go:1262] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.29.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-klcmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-proxy-rlncm_kube-system(8359a60d-b7eb-4782-880d-33a113ebdddb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Apr 15 20:32:31 pause-639400 kubelet[5945]: E0415 20:32:31.096739    5945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-rlncm" podUID="8359a60d-b7eb-4782-880d-33a113ebdddb"
	Apr 15 20:32:31 pause-639400 kubelet[5945]: I0415 20:32:31.223069    5945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34dbac589adc8d2e5248d50396b05d0fe6624773df952a13f35b5e95ba2d104"
	Apr 15 20:32:31 pause-639400 kubelet[5945]: I0415 20:32:31.291879    5945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ad5d2cdba1d6251bd05c3b57c1aa9d2d88a608b94d32d3801454ff39db8d605"
	Apr 15 20:32:31 pause-639400 kubelet[5945]: I0415 20:32:31.293824    5945 scope.go:117] "RemoveContainer" containerID="cb3309317a35fb36a8d8ab931866161554aba7cbe8cfbee75d2e257559c8f7ba"
	

-- /stdout --
** stderr ** 
	W0415 20:32:55.798442    6460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-639400 -n pause-639400
E0415 20:33:13.821474   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-639400 -n pause-639400: (13.2051126s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-639400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (362.46s)

TestNetworkPlugins/group/calico/Start (10800.576s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-959000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestCertExpiration (9m20s)
	TestCertOptions (6m29s)
	TestNetworkPlugins (30m48s)
	TestNetworkPlugins/group/auto (4m56s)
	TestNetworkPlugins/group/auto/Start (4m56s)
	TestNetworkPlugins/group/calico (33s)
	TestNetworkPlugins/group/calico/Start (33s)
	TestStartStop (21m16s)

goroutine 2358 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000732b60, 0xc00088bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000998438, {0x4e6f4a0, 0x2a, 0x2a}, {0x2bcbad5?, 0xad81af?, 0x4e91ca0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0007eb860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0007eb860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000151b80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 41 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 40
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 2333 [syscall, locked to thread]:
syscall.SyscallN(0xc002231b10?, {0xc002231b20?, 0xa37f45?, 0x4f1f0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000c002231bb8?, 0xc002231b80?, 0xa2fe76?, 0x4f1f0e0?, 0xc002231c08?, 0xa22a45?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x420, {0xc00097ecee?, 0x1312, 0xad42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000d4c008?, {0xc00097ecee?, 0xa5c25e?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000d4c008, {0xc00097ecee, 0x1312, 0x1312})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000cd2098, {0xc00097ecee?, 0xc002231d98?, 0x2000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0020ac360, {0x3b22aa0, 0xc0000a7138})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc0020ac360}, {0x3b22aa0, 0xc0000a7138}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b22be0, 0xc0020ac360})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xa20cf6?, {0x3b22be0?, 0xc0020ac360?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc0020ac360}, {0x3b22b60, 0xc000cd2098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026b2ea0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2331
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2334 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000d1ec60, 0xc0022d05a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2331
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 184 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c605a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 964 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3b46770, 0xc000106180}, 0xc000c7df50, 0xc000c7df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3b46770, 0xc000106180}, 0x90?, 0xc000c7df50, 0xc000c7df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3b46770?, 0xc000106180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000c7dfd0?, 0xbae6e4?, 0xc00094e9c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1014
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 164 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000cd4250, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2689880?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c60480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000cd4280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c033b0, {0x3b23ee0, 0xc000ccd4d0}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c033b0, 0x3b9aca00, 0x0, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2354 [syscall, locked to thread]:
syscall.SyscallN(0x7ffaf0474de0?, {0xc000d3fbd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x2dc, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0023d4c90)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c10c60)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c10c60)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000d56000, 0xc000c10c60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000d56000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000d56000, 0xc00219e1e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 165 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3b46770, 0xc000106180}, 0xc002235f50, 0xc002235f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3b46770, 0xc000106180}, 0x20?, 0xc002235f50, 0xc002235f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3b46770?, 0xc000106180?}, 0x0?, 0xb67f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xbae685?, 0xc000d1ef20?, 0xc002208120?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 166 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 700 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x7ffaf0474de0?, {0xc0021c1808?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x518, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000cd8840)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c10b00)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c10b00)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0027c8000, 0xc000c10b00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0027c8000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0027c8000, 0x35d7bf0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 701 [syscall, locked to thread]:
syscall.SyscallN(0x7ffaf0474de0?, {0xc0026739a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x77c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002b66c60)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000d1f080)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000d1f080)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0027c84e0, 0xc000d1f080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0027c84e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:131 +0x576
testing.tRunner(0xc0027c84e0, 0x35d7be8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 185 [chan receive, 172 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000cd4280, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 2329 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0xc002223b30?, {0xc002223b20?, 0xa37f45?, 0x4f1f0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4e3f4c0?, 0xc002223b80?, 0xa2fe76?, 0x4f1f0e0?, 0xc002223c08?, 0xa22a45?, 0x15f7c570a28?, 0x41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x52c, {0xc000cf093a?, 0x2c6, 0xc000cf0800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002385908?, {0xc000cf093a?, 0x0?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002385908, {0xc000cf093a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005c9240, {0xc000cf093a?, 0x15f7c57da88?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0027361e0, {0x3b22aa0, 0xc0000a6058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc0027361e0}, {0x3b22aa0, 0xc0000a6058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b22be0, 0xc0027361e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xa20cf6?, {0x3b22be0?, 0xc0027361e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc0027361e0}, {0x3b22b60, 0xc0005c9240}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0025c02a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 700
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2103 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027c9ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027c9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027c9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027c9ba0, 0xc0007b4180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2098 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0027c9380, {0x2b710f8?, 0x3b1cae0?}, 0xc0020ac000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027c9380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0027c9380, 0xc000880b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2105 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000160340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000160340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000160340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000160340, 0xc0007b4280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2079 [chan receive, 21 minutes]:
testing.(*T).Run(0xc000d57860, {0x2b710f3?, 0xb67613?}, 0x35d7ef0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000d57860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000d57860, 0x35d7d18)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2338 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc0021f3b20?, 0xa37f45?, 0x4f1f0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000003159?, 0xc0021f3b80?, 0xa2fe76?, 0x4f1f0e0?, 0xc0021f3c08?, 0xa22a45?, 0x15f7c570108?, 0xc00242704d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4e0, {0xc0007d020c?, 0x5f4, 0xc0007d0000?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002139908?, {0xc0007d020c?, 0xa5c25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002139908, {0xc0007d020c, 0x5f4, 0x5f4})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6f50, {0xc0007d020c?, 0xc0021f3d98?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00021b1a0, {0x3b22aa0, 0xc000cd20b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc00021b1a0}, {0x3b22aa0, 0xc000cd20b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b22be0, 0xc00021b1a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xa20cf6?, {0x3b22be0?, 0xc00021b1a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc00021b1a0}, {0x3b22b60, 0xc0000a6f50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022093e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 701
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2104 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027c9d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027c9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027c9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027c9d40, 0xc0007b4200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2340 [select]:
os/exec.(*Cmd).watchCtx(0xc000d1f080, 0xc0025c0300)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 701
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2106 [chan receive]:
testing.(*T).Run(0xc000160680, {0x2b710f8?, 0x3b1cae0?}, 0xc00219e1e0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000160680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc000160680, 0xc0007b4480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2101 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027c9860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027c9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027c9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027c9860, 0xc0007b4080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2102 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027c9a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027c9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027c9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027c9a00, 0xc0007b4100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 965 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 964
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2099 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027c9520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027c9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027c9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027c9520, 0xc000881800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2339 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc002357b20?, 0x3b4f6f8?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000213030?, 0xc000921000?, 0x2e?, 0x1000?, 0xc002357c08?, 0xa228db?, 0x0?, 0xc000921000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x428, {0xc0008b2d3a?, 0x2c6, 0xc0008b2c00?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002384288?, {0xc0008b2d3a?, 0xc002357d58?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002384288, {0xc0008b2d3a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6fd8, {0xc0008b2d3a?, 0xc0029fa180?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00021b1d0, {0x3b22aa0, 0xc002244120})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc00021b1d0}, {0x3b22aa0, 0xc002244120}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002357e70?, {0x3b22be0, 0xc00021b1d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002357eb8?, {0x3b22be0?, 0xc00021b1d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc00021b1d0}, {0x3b22b60, 0xc0000a6fd8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002f046c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 701
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 790 [IO wait, 160 minutes]:
internal/poll.runtime_pollWait(0x15f7df080a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xa2fe76?, 0x4f1f0e0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc002384ca0, 0xc0025d5bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc002384c88, 0x258, {0xc000510000?, 0x0?, 0x0?}, 0xc000100008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc002384c88, 0xc0025d5d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc002384c88)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0005ae4e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0005ae4e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007420f0, {0x3b3a2f0, 0xc0005ae4e0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007420f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xbae6e4?, 0xc0021b04e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 737
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 1386 [chan send, 142 minutes]:
os/exec.(*Cmd).watchCtx(0xc002da6420, 0xc002d0c180)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 907
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2036 [chan receive, 31 minutes]:
testing.(*T).Run(0xc000d57380, {0x2b710f3?, 0xa8f56d?}, 0xc0025e0018)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000d57380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000d57380, 0x35d7cd0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2356 [syscall, locked to thread]:
syscall.SyscallN(0x2689f40?, {0xc0027abb20?, 0xa37f45?, 0x4f1f0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0027abb98?, 0xc0027abb80?, 0xa2fe76?, 0x4f1f0e0?, 0xc0027abc08?, 0xa22a45?, 0x15f7c570598?, 0x67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3d8, {0xc0021f7c19?, 0x3e7, 0xad42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022fd688?, {0xc0021f7c19?, 0xa5c25e?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022fd688, {0xc0021f7c19, 0x3e7, 0x3e7})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022441d8, {0xc0021f7c19?, 0xc0027abd98?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00219e300, {0x3b22aa0, 0xc000cd20c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc00219e300}, {0x3b22aa0, 0xc000cd20c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b22be0, 0xc00219e300})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xa20cf6?, {0x3b22be0?, 0xc00219e300?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc00219e300}, {0x3b22b60, 0xc0022441d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x35d7be8?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2354
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2357 [select]:
os/exec.(*Cmd).watchCtx(0xc000c10c60, 0xc0026b2360)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2354
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2261 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021b0d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021b0d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021b0d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021b0d00, 0xc0020ae080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2260
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2100 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027c96c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027c96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027c96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027c96c0, 0xc000881c80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 963 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000c58ad0, 0x35)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2689880?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002122c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c58b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000121a70, {0x3b23ee0, 0xc0025bea20}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000121a70, 0x3b9aca00, 0x0, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1014
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2260 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0021b0340, 0x35d7ef0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2079
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1140 [chan send, 148 minutes]:
os/exec.(*Cmd).watchCtx(0xc00264c840, 0xc00094ea80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1139
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1013 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002122d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 892
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1014 [chan receive, 150 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c58b00, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 892
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 2355 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc00234bb20?, 0xa37f45?, 0x4f1f0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xc00234bb80?, 0xa2fe76?, 0x4f1f0e0?, 0xc00234bc08?, 0xa228db?, 0xa18c66?, 0xc000c61f35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x794, {0xc000cf05f3?, 0x20d, 0xad42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022fd188?, {0xc000cf05f3?, 0xa5c211?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022fd188, {0xc000cf05f3, 0x20d, 0x20d})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002244190, {0xc000cf05f3?, 0xc002450700?, 0x70?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00219e2a0, {0x3b22aa0, 0xc0000a70f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc00219e2a0}, {0x3b22aa0, 0xc0000a70f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00234be78?, {0x3b22be0, 0xc00219e2a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00234bf38?, {0x3b22be0?, 0xc00219e2a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc00219e2a0}, {0x3b22b60, 0xc002244190}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0025c01e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2354
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2330 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c10b00, 0xc0022d02a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 700
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2331 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffaf0474de0?, {0xc002165bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x764, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000cd88a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000d1ec60)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000d1ec60)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000d56b60, 0xc000d1ec60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000d56b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000d56b60, 0xc0020ac000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2098
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2328 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0xc002253b40?, {0xc002253b20?, 0xa37f45?, 0x4f1f0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000118219?, 0xc002253b80?, 0xa2fe76?, 0x4f1f0e0?, 0xc002253c08?, 0xa22a45?, 0x15f7c570108?, 0xc00092244d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x470, {0xc000d07a44?, 0x5bc, 0xad42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002385188?, {0xc000d07a44?, 0x3bb?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002385188, {0xc000d07a44, 0x5bc, 0x5bc})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005c9228, {0xc000d07a44?, 0x5?, 0x205?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0027360f0, {0x3b22aa0, 0xc000cd2048})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc0027360f0}, {0x3b22aa0, 0xc000cd2048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4d968c0?, {0x3b22be0, 0xc0027360f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x3b22be0?, 0xc0027360f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc0027360f0}, {0x3b22b60, 0xc0005c9228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x35d7bf0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 700
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2065 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0027c91e0, 0xc0025e0018)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2263 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021b1040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021b1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021b1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021b1040, 0xc0020ae100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2260
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2262 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021b0ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021b0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021b0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021b0ea0, 0xc0020ae0c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2260
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2264 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021b11e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021b11e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021b11e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021b11e0, 0xc0020ae140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2260
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2265 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021b16c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021b16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021b16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021b16c0, 0xc0020ae180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2260
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2266 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0005b76d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021b1860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021b1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021b1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021b1860, 0xc0020ae200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2260
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2332 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc00251fb20?, 0xa37f45?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xa22db9?, 0xc00251fb80?, 0xa2fe76?, 0x4f1f0e0?, 0xc00251fc08?, 0xa228db?, 0xa18c66?, 0xc003d10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x40c, {0xc0007d0a2c?, 0x5d4, 0xad42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002385688?, {0xc0007d0a2c?, 0xa5c211?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002385688, {0xc0007d0a2c, 0x5d4, 0x5d4})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000cd2070, {0xc0007d0a2c?, 0xc00251fd98?, 0x22b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0020ac330, {0x3b22aa0, 0xc0005c9328})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b22be0, 0xc0020ac330}, {0x3b22aa0, 0xc0005c9328}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b22be0, 0xc0020ac330})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xa20cf6?, {0x3b22be0?, 0xc0020ac330?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b22be0, 0xc0020ac330}, {0x3b22b60, 0xc000cd2070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000003080?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2331
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b


Test pass (155/208)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.88
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.33
9 TestDownloadOnly/v1.20.0/DeleteAll 1.46
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.41
12 TestDownloadOnly/v1.29.3/json-events 12.7
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.3
18 TestDownloadOnly/v1.29.3/DeleteAll 1.44
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 1.38
21 TestDownloadOnly/v1.30.0-rc.2/json-events 12.78
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.47
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 1.46
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 1.31
30 TestBinaryMirror 7.58
31 TestOffline 459.72
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.36
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.36
36 TestAddons/Setup 408.34
39 TestAddons/parallel/Ingress 74.16
40 TestAddons/parallel/InspektorGadget 28.68
41 TestAddons/parallel/MetricsServer 23.05
42 TestAddons/parallel/HelmTiller 33.01
44 TestAddons/parallel/CSI 115.39
45 TestAddons/parallel/Headlamp 35.01
46 TestAddons/parallel/CloudSpanner 21.77
47 TestAddons/parallel/LocalPath 95.13
48 TestAddons/parallel/NvidiaDevicePlugin 21.17
49 TestAddons/parallel/Yakd 5.02
52 TestAddons/serial/GCPAuth/Namespaces 0.38
53 TestAddons/StoppedEnableDisable 58.28
56 TestDockerFlags 446.89
57 TestForceSystemdFlag 275.43
58 TestForceSystemdEnv 555.98
65 TestErrorSpam/start 18.81
66 TestErrorSpam/status 40.02
67 TestErrorSpam/pause 24.82
68 TestErrorSpam/unpause 25.19
69 TestErrorSpam/stop 65.58
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 256.3
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 134.49
76 TestFunctional/serial/KubeContext 0.14
77 TestFunctional/serial/KubectlGetPods 0.25
80 TestFunctional/serial/CacheCmd/cache/add_remote 28.37
81 TestFunctional/serial/CacheCmd/cache/add_local 12.18
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.31
83 TestFunctional/serial/CacheCmd/cache/list 0.3
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 10.19
85 TestFunctional/serial/CacheCmd/cache/cache_reload 39.76
86 TestFunctional/serial/CacheCmd/cache/delete 0.62
87 TestFunctional/serial/MinikubeKubectlCmd 0.53
89 TestFunctional/serial/ExtraConfig 137.59
90 TestFunctional/serial/ComponentHealth 0.2
91 TestFunctional/serial/LogsCmd 9.36
92 TestFunctional/serial/LogsFileCmd 11.63
93 TestFunctional/serial/InvalidService 22.5
99 TestFunctional/parallel/StatusCmd 47.65
103 TestFunctional/parallel/ServiceCmdConnect 37.42
104 TestFunctional/parallel/AddonsCmd 0.79
105 TestFunctional/parallel/PersistentVolumeClaim 44.32
107 TestFunctional/parallel/SSHCmd 24.67
108 TestFunctional/parallel/CpCmd 64.81
109 TestFunctional/parallel/MySQL 69.74
110 TestFunctional/parallel/FileSync 12.79
111 TestFunctional/parallel/CertSync 67.24
115 TestFunctional/parallel/NodeLabels 0.26
117 TestFunctional/parallel/NonActiveRuntimeDisabled 12.86
119 TestFunctional/parallel/License 4.14
120 TestFunctional/parallel/Version/short 0.29
121 TestFunctional/parallel/Version/components 9.47
122 TestFunctional/parallel/ImageCommands/ImageListShort 8.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 8.39
124 TestFunctional/parallel/ImageCommands/ImageListJson 8.38
125 TestFunctional/parallel/ImageCommands/ImageListYaml 8.34
126 TestFunctional/parallel/ImageCommands/ImageBuild 29.56
127 TestFunctional/parallel/ImageCommands/Setup 4.87
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 26.42
129 TestFunctional/parallel/ProfileCmd/profile_not_create 12.53
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 23.51
131 TestFunctional/parallel/ProfileCmd/profile_list 12.07
132 TestFunctional/parallel/ProfileCmd/profile_json_output 12.29
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 31.46
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10.46
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.68
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
145 TestFunctional/parallel/ServiceCmd/DeployApp 13.5
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 11.39
147 TestFunctional/parallel/ImageCommands/ImageRemove 17.92
148 TestFunctional/parallel/ServiceCmd/List 14.66
149 TestFunctional/parallel/ServiceCmd/JSONOutput 15.24
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 21.59
151 TestFunctional/parallel/DockerEnv/powershell 51.56
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 12.04
156 TestFunctional/parallel/UpdateContextCmd/no_changes 2.76
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.84
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.79
159 TestFunctional/delete_addon-resizer_images 0.5
160 TestFunctional/delete_my-image_image 0.19
161 TestFunctional/delete_minikube_cached_images 0.2
169 TestMultiControlPlane/serial/NodeLabels 0.21
175 TestImageBuild/serial/Setup 214.18
176 TestImageBuild/serial/NormalBuild 10.36
177 TestImageBuild/serial/BuildWithBuildArg 9.8
178 TestImageBuild/serial/BuildWithDockerIgnore 8.33
179 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.13
183 TestJSONOutput/start/Command 255.7
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 8.63
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 8.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 36.81
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 1.58
211 TestMainNoArgs 0.27
212 TestMinikubeProfile 562.59
215 TestMountStart/serial/StartWithMountFirst 166.68
216 TestMountStart/serial/VerifyMountFirst 10.26
217 TestMountStart/serial/StartWithMountSecond 166.75
218 TestMountStart/serial/VerifyMountSecond 10.27
219 TestMountStart/serial/DeleteFirst 29.19
220 TestMountStart/serial/VerifyMountPostDelete 10.16
221 TestMountStart/serial/Stop 28.46
222 TestMountStart/serial/RestartStopped 126.57
223 TestMountStart/serial/VerifyMountPostStop 10.12
226 TestMultiNode/serial/FreshStart2Nodes 450.81
227 TestMultiNode/serial/DeployApp2Nodes 9.43
229 TestMultiNode/serial/AddNode 247.64
230 TestMultiNode/serial/MultiNodeLabels 0.2
231 TestMultiNode/serial/ProfileList 10.55
232 TestMultiNode/serial/CopyFile 392.12
233 TestMultiNode/serial/StopNode 82.61
239 TestPreload 550.1
240 TestScheduledStopWindows 351.93
245 TestRunningBinaryUpgrade 1012.51
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.41
263 TestStoppedBinaryUpgrade/Setup 0.7
264 TestStoppedBinaryUpgrade/Upgrade 937.01
273 TestPause/serial/Start 574.02
275 TestStoppedBinaryUpgrade/MinikubeLogs 11.38
TestDownloadOnly/v1.20.0/json-events (19.88s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-994200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-994200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (19.8765972s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.88s)

TestDownloadOnly/v1.20.0/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.33s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-994200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-994200: exit status 85 (325.4698ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |          |
	|         | -p download-only-994200        |                      |                   |                |                     |          |
	|         | --force --alsologtostderr      |                      |                   |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |          |
	|         | --container-runtime=docker     |                      |                   |                |                     |          |
	|         | --driver=hyperv                |                      |                   |                |                     |          |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:38:58
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:38:58.319985    9576 out.go:291] Setting OutFile to fd 644 ...
	I0415 17:38:58.321019    9576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:38:58.321019    9576 out.go:304] Setting ErrFile to fd 648...
	I0415 17:38:58.321019    9576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 17:38:58.338807    9576 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0415 17:38:58.350440    9576 out.go:298] Setting JSON to true
	I0415 17:38:58.353467    9576 start.go:129] hostinfo: {"hostname":"minikube6","uptime":14465,"bootTime":1713188273,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 17:38:58.354457    9576 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:38:58.361133    9576 out.go:97] [download-only-994200] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:38:58.364139    9576 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	W0415 17:38:58.361992    9576 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0415 17:38:58.361992    9576 notify.go:220] Checking for updates...
	I0415 17:38:58.369656    9576 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 17:38:58.372304    9576 out.go:169] MINIKUBE_LOCATION=18634
	I0415 17:38:58.378585    9576 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0415 17:38:58.384584    9576 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 17:38:58.385545    9576 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:39:04.135365    9576 out.go:97] Using the hyperv driver based on user configuration
	I0415 17:39:04.135444    9576 start.go:297] selected driver: hyperv
	I0415 17:39:04.135519    9576 start.go:901] validating driver "hyperv" against <nil>
	I0415 17:39:04.135957    9576 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:39:04.192103    9576 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0415 17:39:04.193420    9576 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 17:39:04.193517    9576 cni.go:84] Creating CNI manager for ""
	I0415 17:39:04.193517    9576 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 17:39:04.193517    9576 start.go:340] cluster config:
	{Name:download-only-994200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-994200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:39:04.194233    9576 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:39:04.198045    9576 out.go:97] Downloading VM boot image ...
	I0415 17:39:04.198259    9576 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 17:39:10.376995    9576 out.go:97] Starting "download-only-994200" primary control-plane node in "download-only-994200" cluster
	I0415 17:39:10.377832    9576 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 17:39:10.421571    9576 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 17:39:10.421689    9576 cache.go:56] Caching tarball of preloaded images
	I0415 17:39:10.422144    9576 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 17:39:10.432402    9576 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 17:39:10.432402    9576 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:39:10.519080    9576 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-994200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-994200"
-- /stdout --
** stderr ** 
	W0415 17:39:18.222087   10056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.33s)

TestDownloadOnly/v1.20.0/DeleteAll (1.46s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4554515s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.46s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.41s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-994200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-994200: (1.4049483s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.41s)

TestDownloadOnly/v1.29.3/json-events (12.7s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-472100 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-472100 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv: (12.7019303s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (12.70s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-472100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-472100: exit status 85 (299.5546ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |                     |
	|         | -p download-only-994200        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-994200        | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| start   | -o=json --download-only        | download-only-472100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC |                     |
	|         | -p download-only-472100        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:39:21
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:39:21.497892    5300 out.go:291] Setting OutFile to fd 764 ...
	I0415 17:39:21.498512    5300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:39:21.498512    5300 out.go:304] Setting ErrFile to fd 768...
	I0415 17:39:21.498512    5300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:39:21.525238    5300 out.go:298] Setting JSON to true
	I0415 17:39:21.528726    5300 start.go:129] hostinfo: {"hostname":"minikube6","uptime":14488,"bootTime":1713188273,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 17:39:21.528726    5300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:39:21.689009    5300 out.go:97] [download-only-472100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:39:21.689371    5300 notify.go:220] Checking for updates...
	I0415 17:39:21.692071    5300 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 17:39:21.694922    5300 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 17:39:21.697269    5300 out.go:169] MINIKUBE_LOCATION=18634
	I0415 17:39:21.699992    5300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0415 17:39:21.704835    5300 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 17:39:21.705963    5300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:39:27.563306    5300 out.go:97] Using the hyperv driver based on user configuration
	I0415 17:39:27.563306    5300 start.go:297] selected driver: hyperv
	I0415 17:39:27.563306    5300 start.go:901] validating driver "hyperv" against <nil>
	I0415 17:39:27.563764    5300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:39:27.618195    5300 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0415 17:39:27.619776    5300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 17:39:27.619957    5300 cni.go:84] Creating CNI manager for ""
	I0415 17:39:27.620081    5300 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:39:27.620081    5300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 17:39:27.620347    5300 start.go:340] cluster config:
	{Name:download-only-472100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-472100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:39:27.620670    5300 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:39:27.624011    5300 out.go:97] Starting "download-only-472100" primary control-plane node in "download-only-472100" cluster
	I0415 17:39:27.624011    5300 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:39:27.659310    5300 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:39:27.660074    5300 cache.go:56] Caching tarball of preloaded images
	I0415 17:39:27.660614    5300 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:39:27.663856    5300 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 17:39:27.663856    5300 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:39:27.725607    5300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-472100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-472100"
-- /stdout --
** stderr ** 
	W0415 17:39:34.108204    9184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.30s)

TestDownloadOnly/v1.29.3/DeleteAll (1.44s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4430703s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (1.44s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.38s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-472100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-472100: (1.3778216s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.38s)

TestDownloadOnly/v1.30.0-rc.2/json-events (12.78s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-230500 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-230500 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=hyperv: (12.7763573s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (12.78s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.47s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-230500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-230500: exit status 85 (471.9204ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |                     |
	|         | -p download-only-994200           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=hyperv                   |                      |                   |                |                     |                     |
	| delete  | --all                             | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-994200           | download-only-994200 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| start   | -o=json --download-only           | download-only-472100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC |                     |
	|         | -p download-only-472100           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=hyperv                   |                      |                   |                |                     |                     |
	| delete  | --all                             | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-472100           | download-only-472100 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| start   | -o=json --download-only           | download-only-230500 | minikube6\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC |                     |
	|         | -p download-only-230500           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=hyperv                   |                      |                   |                |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:39:37
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:39:37.330268    1604 out.go:291] Setting OutFile to fd 760 ...
	I0415 17:39:37.330521    1604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:39:37.330521    1604 out.go:304] Setting ErrFile to fd 784...
	I0415 17:39:37.330521    1604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:39:37.354423    1604 out.go:298] Setting JSON to true
	I0415 17:39:37.358688    1604 start.go:129] hostinfo: {"hostname":"minikube6","uptime":14504,"bootTime":1713188273,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 17:39:37.358688    1604 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:39:37.364377    1604 out.go:97] [download-only-230500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:39:37.364628    1604 notify.go:220] Checking for updates...
	I0415 17:39:37.366952    1604 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 17:39:37.369533    1604 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 17:39:37.372372    1604 out.go:169] MINIKUBE_LOCATION=18634
	I0415 17:39:37.374979    1604 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0415 17:39:37.379934    1604 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 17:39:37.379934    1604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:39:43.368075    1604 out.go:97] Using the hyperv driver based on user configuration
	I0415 17:39:43.369116    1604 start.go:297] selected driver: hyperv
	I0415 17:39:43.369286    1604 start.go:901] validating driver "hyperv" against <nil>
	I0415 17:39:43.369668    1604 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:39:43.420344    1604 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0415 17:39:43.421352    1604 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 17:39:43.421352    1604 cni.go:84] Creating CNI manager for ""
	I0415 17:39:43.421352    1604 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:39:43.422135    1604 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 17:39:43.422177    1604 start.go:340] cluster config:
	{Name:download-only-230500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-230500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:39:43.422177    1604 iso.go:125] acquiring lock: {Name:mkb11aac800c033551a31c7a773c0461f92e4459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:39:43.425677    1604 out.go:97] Starting "download-only-230500" primary control-plane node in "download-only-230500" cluster
	I0415 17:39:43.426155    1604 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 17:39:43.462868    1604 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 17:39:43.463148    1604 cache.go:56] Caching tarball of preloaded images
	I0415 17:39:43.463624    1604 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 17:39:43.468036    1604 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 17:39:43.468168    1604 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:39:43.539440    1604 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:9834337eee074d8b5e25932a2917a549 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-230500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-230500"

-- /stdout --
** stderr ** 
	W0415 17:39:50.017307    2720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.47s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (1.46s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4625365s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (1.46s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (1.31s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-230500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-230500: (1.3048934s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (1.31s)

TestBinaryMirror (7.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-605800 --alsologtostderr --binary-mirror http://127.0.0.1:50128 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-605800 --alsologtostderr --binary-mirror http://127.0.0.1:50128 --driver=hyperv: (6.6382061s)
helpers_test.go:175: Cleaning up "binary-mirror-605800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-605800
--- PASS: TestBinaryMirror (7.58s)

TestOffline (459.72s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-993800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-993800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m51.6124155s)
helpers_test.go:175: Cleaning up "offline-docker-993800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-993800
E0415 20:15:10.552065   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-993800: (48.1099429s)
--- PASS: TestOffline (459.72s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.36s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-961400
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-961400: exit status 85 (355.2433ms)

-- stdout --
	* Profile "addons-961400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-961400"

-- /stdout --
** stderr ** 
	W0415 17:40:04.968878    1940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.36s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.36s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-961400
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-961400: exit status 85 (358.9286ms)

-- stdout --
	* Profile "addons-961400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-961400"

-- /stdout --
** stderr ** 
	W0415 17:40:04.967876   10120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.36s)

TestAddons/Setup (408.34s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-961400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-961400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m48.3361006s)
--- PASS: TestAddons/Setup (408.34s)

TestAddons/parallel/Ingress (74.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-961400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-961400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-961400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6e4d2296-90df-40fb-a92c-7c5f725e78c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6e4d2296-90df-40fb-a92c-7c5f725e78c4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.009518s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.7672634s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-961400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0415 17:48:05.949550    8336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-961400 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 ip: (2.9361127s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.19.57.138
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable ingress-dns --alsologtostderr -v=1: (17.806518s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable ingress --alsologtostderr -v=1: (25.4378607s)
--- PASS: TestAddons/parallel/Ingress (74.16s)

TestAddons/parallel/InspektorGadget (28.68s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-z6m92" [90679ceb-5896-4763-98a1-41df5bcccd67] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0170495s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-961400
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-961400: (22.6485248s)
--- PASS: TestAddons/parallel/InspektorGadget (28.68s)

TestAddons/parallel/MetricsServer (23.05s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 24.8098ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-cccrf" [9b4bd2c2-1db7-4ed4-bc57-9ec713d519da] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0169593s
addons_test.go:415: (dbg) Run:  kubectl --context addons-961400 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable metrics-server --alsologtostderr -v=1: (17.8098977s)
--- PASS: TestAddons/parallel/MetricsServer (23.05s)

TestAddons/parallel/HelmTiller (33.01s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 9.3571ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-c97nd" [f5730ee6-1646-4c69-a454-1c22681d47f0] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.0113968s
addons_test.go:473: (dbg) Run:  kubectl --context addons-961400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-961400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.1449483s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable helm-tiller --alsologtostderr -v=1: (16.8207443s)
--- PASS: TestAddons/parallel/HelmTiller (33.01s)

TestAddons/parallel/CSI (115.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 25.9884ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-961400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-961400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4433b2fc-71a4-44c8-9c87-e68268289c31] Pending
helpers_test.go:344: "task-pv-pod" [4433b2fc-71a4-44c8-9c87-e68268289c31] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4433b2fc-71a4-44c8-9c87-e68268289c31] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 23.0188758s
addons_test.go:584: (dbg) Run:  kubectl --context addons-961400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-961400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-961400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-961400 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-961400 delete pod task-pv-pod: (1.6982321s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-961400 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-961400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-961400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [745dc322-ad86-4364-8f0b-3aab400da431] Pending
helpers_test.go:344: "task-pv-pod-restore" [745dc322-ad86-4364-8f0b-3aab400da431] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [745dc322-ad86-4364-8f0b-3aab400da431] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.0202358s
addons_test.go:626: (dbg) Run:  kubectl --context addons-961400 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-961400 delete pod task-pv-pod-restore: (1.4276977s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-961400 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-961400 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.7271605s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable volumesnapshots --alsologtostderr -v=1: (16.8628091s)
--- PASS: TestAddons/parallel/CSI (115.39s)

TestAddons/parallel/Headlamp (35.01s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-961400 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-961400 --alsologtostderr -v=1: (16.998757s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-zsj26" [37d8951f-d8f4-4b38-810b-4615deb3f72a] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-zsj26" [37d8951f-d8f4-4b38-810b-4615deb3f72a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-zsj26" [37d8951f-d8f4-4b38-810b-4615deb3f72a] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.0118622s
--- PASS: TestAddons/parallel/Headlamp (35.01s)

TestAddons/parallel/CloudSpanner (21.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-fptfb" [88398910-1139-49f0-8b62-0bef8af21506] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0237177s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-961400
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-961400: (16.7261836s)
--- PASS: TestAddons/parallel/CloudSpanner (21.77s)

TestAddons/parallel/LocalPath (95.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-961400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-961400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cc68f864-35da-400f-ab57-41f570e2b67d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cc68f864-35da-400f-ab57-41f570e2b67d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cc68f864-35da-400f-ab57-41f570e2b67d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0106427s
addons_test.go:891: (dbg) Run:  kubectl --context addons-961400 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 ssh "cat /opt/local-path-provisioner/pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 ssh "cat /opt/local-path-provisioner/pvc-b043ef5f-899c-46dc-bf5d-2436443ceed8_default_test-pvc/file1": (11.2883372s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-961400 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-961400 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-961400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-961400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.4342169s)
--- PASS: TestAddons/parallel/LocalPath (95.13s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (21.17s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pczqg" [aee0ca2a-fbc4-4036-9d10-7bd560b85a6b] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0547659s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-961400
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-961400: (16.1100802s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.17s)

                                                
                                    
TestAddons/parallel/Yakd (5.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-p6bck" [279bf4bd-75ba-43a4-a589-1165fa81c257] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0134852s
--- PASS: TestAddons/parallel/Yakd (5.02s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.38s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-961400 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-961400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.38s)

                                                
                                    
TestAddons/StoppedEnableDisable (58.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-961400
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-961400: (44.492596s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-961400
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-961400: (5.5928408s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-961400
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-961400: (5.1414074s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-961400
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-961400: (3.0554976s)
--- PASS: TestAddons/StoppedEnableDisable (58.28s)

                                                
                                    
TestDockerFlags (446.89s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-503400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-503400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m17.6865201s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-503400 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-503400 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.7166885s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-503400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-503400 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.669035s)
helpers_test.go:175: Cleaning up "docker-flags-503400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-503400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-503400: (47.8179486s)
--- PASS: TestDockerFlags (446.89s)

                                                
                                    
TestForceSystemdFlag (275.43s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-993800 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-993800 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m36.6383029s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-993800 ssh "docker info --format {{.CgroupDriver}}"
E0415 20:11:53.596494   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-993800 ssh "docker info --format {{.CgroupDriver}}": (10.7836605s)
helpers_test.go:175: Cleaning up "force-systemd-flag-993800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-993800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-993800: (48.0090797s)
--- PASS: TestForceSystemdFlag (275.43s)

                                                
                                    
TestForceSystemdEnv (555.98s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-298600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0415 20:10:10.555917   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-298600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (8m23.7836755s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-298600 ssh "docker info --format {{.CgroupDriver}}"
E0415 20:16:53.598224   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-298600 ssh "docker info --format {{.CgroupDriver}}": (10.673505s)
helpers_test.go:175: Cleaning up "force-systemd-env-298600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-298600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-298600: (41.5224791s)
--- PASS: TestForceSystemdEnv (555.98s)

                                                
                                    
TestErrorSpam/start (18.81s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 start --dry-run: (6.2334671s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 start --dry-run: (6.2874315s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 start --dry-run: (6.289116s)
--- PASS: TestErrorSpam/start (18.81s)

                                                
                                    
TestErrorSpam/status (40.02s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 status: (13.7651287s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 status: (13.1467536s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 status: (13.1082222s)
--- PASS: TestErrorSpam/status (40.02s)

                                                
                                    
TestErrorSpam/pause (24.82s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 pause: (8.4063998s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 pause: (8.1035749s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 pause: (8.303732s)
--- PASS: TestErrorSpam/pause (24.82s)

                                                
                                    
TestErrorSpam/unpause (25.19s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 unpause: (8.420001s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 unpause: (8.3635429s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 unpause: (8.3989543s)
--- PASS: TestErrorSpam/unpause (25.19s)

                                                
                                    
TestErrorSpam/stop (65.58s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 stop
E0415 17:56:53.533773   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 17:57:21.360876   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 stop: (41.3910248s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 stop: (12.2268413s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199200 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199200 stop: (11.962927s)
--- PASS: TestErrorSpam/stop (65.58s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\11272\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (256.3s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-831100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0415 18:01:53.542944   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-831100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m16.2895966s)
--- PASS: TestFunctional/serial/StartWithProxy (256.30s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (134.49s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-831100 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-831100 --alsologtostderr -v=8: (2m14.4842882s)
functional_test.go:659: soft start took 2m14.4854171s for "functional-831100" cluster.
--- PASS: TestFunctional/serial/SoftStart (134.49s)

                                                
                                    
TestFunctional/serial/KubeContext (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-831100 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (28.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cache add registry.k8s.io/pause:3.1: (9.6427478s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cache add registry.k8s.io/pause:3.3: (9.3179676s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cache add registry.k8s.io/pause:latest: (9.4094819s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (28.37s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (12.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-831100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2784676631\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-831100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2784676631\001: (2.5407319s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cache add minikube-local-cache-test:functional-831100
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cache add minikube-local-cache-test:functional-831100: (9.1019679s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cache delete minikube-local-cache-test:functional-831100
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-831100
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (12.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (10.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh sudo crictl images: (10.1905019s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (10.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (39.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh sudo docker rmi registry.k8s.io/pause:latest: (10.1900302s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (10.3328643s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:05:48.039905    3628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cache reload: (9.0348774s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (10.1965175s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (39.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.62s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 kubectl -- --context functional-831100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                    
TestFunctional/serial/ExtraConfig (137.59s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-831100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0415 18:08:16.733360   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-831100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m17.5932926s)
functional_test.go:757: restart took 2m17.5932926s for "functional-831100" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (137.59s)

TestFunctional/serial/ComponentHealth (0.2s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-831100 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

TestFunctional/serial/LogsCmd (9.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 logs: (9.3641281s)
--- PASS: TestFunctional/serial/LogsCmd (9.36s)

TestFunctional/serial/LogsFileCmd (11.63s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3536581081\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3536581081\001\logs.txt: (11.6237856s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.63s)

TestFunctional/serial/InvalidService (22.5s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-831100 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-831100
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-831100: exit status 115 (18.1310174s)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://172.19.62.76:31364 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	W0415 18:09:38.075030    7000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-831100 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (22.50s)

TestFunctional/parallel/StatusCmd (47.65s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 status: (16.509218s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.2589868s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 status -o json: (15.8788396s)
--- PASS: TestFunctional/parallel/StatusCmd (47.65s)

TestFunctional/parallel/ServiceCmdConnect (37.42s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-831100 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-831100 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-r5zgz" [3787eede-4f29-4f85-aaa4-335056b74237] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-r5zgz" [3787eede-4f29-4f85-aaa4-335056b74237] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.0114824s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 service hello-node-connect --url: (19.9036624s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.19.62.76:30689
functional_test.go:1671: http://172.19.62.76:30689: success! body:
Hostname: hello-node-connect-55497b8b78-r5zgz
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.19.62.76:8080/
Request Headers:
	accept-encoding=gzip
	host=172.19.62.76:30689
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (37.42s)

TestFunctional/parallel/AddonsCmd (0.79s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.79s)

TestFunctional/parallel/PersistentVolumeClaim (44.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9494c9a1-8863-43cd-91b2-67524861807c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0090138s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-831100 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-831100 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-831100 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-831100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b02e8a9e-77f9-43a7-bd43-e4e749c757a0] Pending
helpers_test.go:344: "sp-pod" [b02e8a9e-77f9-43a7-bd43-e4e749c757a0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b02e8a9e-77f9-43a7-bd43-e4e749c757a0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.0123609s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-831100 exec sp-pod -- touch /tmp/mount/foo
E0415 18:11:53.552388   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-831100 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-831100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e70c3294-66af-4620-994c-bdae95887d3e] Pending
helpers_test.go:344: "sp-pod" [e70c3294-66af-4620-994c-bdae95887d3e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e70c3294-66af-4620-994c-bdae95887d3e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0140118s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-831100 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.32s)

TestFunctional/parallel/SSHCmd (24.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "echo hello": (12.9027684s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "cat /etc/hostname": (11.7663819s)
--- PASS: TestFunctional/parallel/SSHCmd (24.67s)

TestFunctional/parallel/CpCmd (64.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.5862735s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh -n functional-831100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh -n functional-831100 "sudo cat /home/docker/cp-test.txt": (11.6670739s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cp functional-831100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd2302178801\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cp functional-831100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd2302178801\001\cp-test.txt: (11.4144764s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh -n functional-831100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh -n functional-831100 "sudo cat /home/docker/cp-test.txt": (11.6457674s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (10.0513032s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh -n functional-831100 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh -n functional-831100 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.4385519s)
--- PASS: TestFunctional/parallel/CpCmd (64.81s)

TestFunctional/parallel/MySQL (69.74s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-831100 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-r5jqd" [72db4920-2657-4698-ae32-0c31596fc1df] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-r5jqd" [72db4920-2657-4698-ae32-0c31596fc1df] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 51.016421s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;": exit status 1 (498.358ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;": exit status 1 (397.3274ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;": exit status 1 (393.2137ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;": exit status 1 (449.871ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;": exit status 1 (493.6161ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-831100 exec mysql-859648c796-r5jqd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (69.74s)

TestFunctional/parallel/FileSync (12.79s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11272/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/test/nested/copy/11272/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/test/nested/copy/11272/hosts": (12.7940191s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (12.79s)

TestFunctional/parallel/CertSync (67.24s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11272.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/11272.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/11272.pem": (11.8076865s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11272.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /usr/share/ca-certificates/11272.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /usr/share/ca-certificates/11272.pem": (11.8791454s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.7066765s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/112722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/112722.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/112722.pem": (10.7806321s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/112722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /usr/share/ca-certificates/112722.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /usr/share/ca-certificates/112722.pem": (10.7075117s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.3561478s)
--- PASS: TestFunctional/parallel/CertSync (67.24s)

TestFunctional/parallel/NodeLabels (0.26s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-831100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.26s)

TestFunctional/parallel/NonActiveRuntimeDisabled (12.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 ssh "sudo systemctl is-active crio": exit status 1 (12.8599726s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	W0415 18:09:57.171282    8768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.86s)

TestFunctional/parallel/License (4.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (4.1128601s)
--- PASS: TestFunctional/parallel/License (4.14s)

TestFunctional/parallel/Version/short (0.29s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 version --short
--- PASS: TestFunctional/parallel/Version/short (0.29s)

TestFunctional/parallel/Version/components (9.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 version -o=json --components: (9.4660783s)
--- PASS: TestFunctional/parallel/Version/components (9.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls --format short --alsologtostderr: (8.2566954s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-831100 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-831100
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-831100
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-831100 image ls --format short --alsologtostderr:
W0415 18:13:05.949697    6076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 18:13:06.044156    6076 out.go:291] Setting OutFile to fd 792 ...
I0415 18:13:06.046780    6076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:06.046780    6076 out.go:304] Setting ErrFile to fd 848...
I0415 18:13:06.046780    6076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:06.071202    6076 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:06.071202    6076 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:06.072212    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:08.514112    6076 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:08.514202    6076 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:08.531243    6076 ssh_runner.go:195] Run: systemctl --version
I0415 18:13:08.531243    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:10.979623    6076 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:10.979680    6076 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:10.979680    6076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
I0415 18:13:13.858969    6076 main.go:141] libmachine: [stdout =====>] : 172.19.62.76

I0415 18:13:13.859085    6076 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:13.859959    6076 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
I0415 18:13:13.969975    6076 ssh_runner.go:235] Completed: systemctl --version: (5.4386287s)
I0415 18:13:13.981974    6076 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (8.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls --format table --alsologtostderr: (8.3918064s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-831100 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-831100 | 10ceed141c1a7 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| docker.io/library/nginx                     | latest            | c613f16b66424 | 187MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-831100 | 3a93c076fad44 | 1.24MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-831100 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-831100 image ls --format table --alsologtostderr:
W0415 18:13:35.634577    6124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 18:13:35.721475    6124 out.go:291] Setting OutFile to fd 732 ...
I0415 18:13:35.737354    6124 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:35.737354    6124 out.go:304] Setting ErrFile to fd 672...
I0415 18:13:35.737581    6124 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:35.756643    6124 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:35.757626    6124 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:35.757950    6124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:38.202021    6124 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:38.202716    6124 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:38.217182    6124 ssh_runner.go:195] Run: systemctl --version
I0415 18:13:38.217182    6124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:40.794742    6124 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:40.795230    6124 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:40.795230    6124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
I0415 18:13:43.685998    6124 main.go:141] libmachine: [stdout =====>] : 172.19.62.76

I0415 18:13:43.685998    6124 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:43.686531    6124 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
I0415 18:13:43.787427    6124 ssh_runner.go:235] Completed: systemctl --version: (5.5700658s)
I0415 18:13:43.798639    6124 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.39s)

TestFunctional/parallel/ImageCommands/ImageListJson (8.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls --format json --alsologtostderr: (8.3793044s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-831100 image ls --format json --alsologtostderr:
[{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"10ceed141c1a7706fe30548dd9f16e4ff294cd62064933f21680f79e69346e90","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-831100"],"size":"30"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-831100"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-831100 image ls --format json --alsologtostderr:
W0415 18:13:33.399458   13752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 18:13:33.490950   13752 out.go:291] Setting OutFile to fd 516 ...
I0415 18:13:33.492205   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:33.492205   13752 out.go:304] Setting ErrFile to fd 960...
I0415 18:13:33.492313   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:33.509231   13752 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:33.510272   13752 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:33.511892   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:35.932533   13752 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:35.933459   13752 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:35.950268   13752 ssh_runner.go:195] Run: systemctl --version
I0415 18:13:35.950268   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:38.360376   13752 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:38.360605   13752 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:38.360727   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
I0415 18:13:41.432092   13752 main.go:141] libmachine: [stdout =====>] : 172.19.62.76

I0415 18:13:41.432279   13752 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:41.434192   13752 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
I0415 18:13:41.546934   13752 ssh_runner.go:235] Completed: systemctl --version: (5.5966201s)
I0415 18:13:41.562627   13752 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (8.38s)

TestFunctional/parallel/ImageCommands/ImageListYaml (8.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls --format yaml --alsologtostderr: (8.3350227s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-831100 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 10ceed141c1a7706fe30548dd9f16e4ff294cd62064933f21680f79e69346e90
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-831100
size: "30"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-831100
size: "32900000"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-831100 image ls --format yaml --alsologtostderr:
W0415 18:13:14.192955    9632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 18:13:14.299165    9632 out.go:291] Setting OutFile to fd 916 ...
I0415 18:13:14.299366    9632 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:14.299366    9632 out.go:304] Setting ErrFile to fd 848...
I0415 18:13:14.299366    9632 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:14.316221    9632 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:14.317218    9632 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:14.317218    9632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:16.779120    9632 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:16.779120    9632 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:16.795334    9632 ssh_runner.go:195] Run: systemctl --version
I0415 18:13:16.795520    9632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:19.283746    9632 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:19.283746    9632 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:19.283924    9632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
I0415 18:13:22.218770    9632 main.go:141] libmachine: [stdout =====>] : 172.19.62.76

I0415 18:13:22.219708    9632 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:22.219946    9632 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
I0415 18:13:22.314565    9632 ssh_runner.go:235] Completed: systemctl --version: (5.5189998s)
I0415 18:13:22.326334    9632 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (29.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-831100 ssh pgrep buildkitd: exit status 1 (10.682761s)

** stderr ** 
	W0415 18:13:22.564225    8344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image build -t localhost/my-image:functional-831100 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image build -t localhost/my-image:functional-831100 testdata\build --alsologtostderr: (10.8244222s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-831100 image build -t localhost/my-image:functional-831100 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in dda8190db2e1
---> Removed intermediate container dda8190db2e1
---> 02cd5c8e7648
Step 3/3 : ADD content.txt /
---> 3a93c076fad4
Successfully built 3a93c076fad4
Successfully tagged localhost/my-image:functional-831100
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-831100 image build -t localhost/my-image:functional-831100 testdata\build --alsologtostderr:
W0415 18:13:33.226691    5028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 18:13:33.326970    5028 out.go:291] Setting OutFile to fd 920 ...
I0415 18:13:33.342722    5028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:33.342722    5028 out.go:304] Setting ErrFile to fd 672...
I0415 18:13:33.342722    5028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 18:13:33.362643    5028 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:33.384455    5028 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 18:13:33.384455    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:35.817700    5028 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:35.817904    5028 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:35.832873    5028 ssh_runner.go:195] Run: systemctl --version
I0415 18:13:35.832873    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-831100 ).state
I0415 18:13:38.293795    5028 main.go:141] libmachine: [stdout =====>] : Running

I0415 18:13:38.293795    5028 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:38.294104    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-831100 ).networkadapters[0]).ipaddresses[0]
I0415 18:13:41.253248    5028 main.go:141] libmachine: [stdout =====>] : 172.19.62.76

I0415 18:13:41.253248    5028 main.go:141] libmachine: [stderr =====>] : 
I0415 18:13:41.254045    5028 sshutil.go:53] new ssh client: &{IP:172.19.62.76 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-831100\id_rsa Username:docker}
I0415 18:13:41.370812    5028 ssh_runner.go:235] Completed: systemctl --version: (5.5378947s)
I0415 18:13:41.370914    5028 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.175414215.tar
I0415 18:13:41.391468    5028 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 18:13:41.435115    5028 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.175414215.tar
I0415 18:13:41.445290    5028 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.175414215.tar: stat -c "%s %y" /var/lib/minikube/build/build.175414215.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.175414215.tar': No such file or directory
I0415 18:13:41.445633    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.175414215.tar --> /var/lib/minikube/build/build.175414215.tar (3072 bytes)
I0415 18:13:41.515786    5028 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.175414215
I0415 18:13:41.556122    5028 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.175414215 -xf /var/lib/minikube/build/build.175414215.tar
I0415 18:13:41.573600    5028 docker.go:360] Building image: /var/lib/minikube/build/build.175414215
I0415 18:13:41.585266    5028 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-831100 /var/lib/minikube/build/build.175414215
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0415 18:13:43.759960    5028 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-831100 /var/lib/minikube/build/build.175414215: (2.1746172s)
I0415 18:13:43.776243    5028 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.175414215
I0415 18:13:43.829216    5028 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.175414215.tar
I0415 18:13:43.860136    5028 build_images.go:217] Built localhost/my-image:functional-831100 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.175414215.tar
I0415 18:13:43.860277    5028 build_images.go:133] succeeded building to: functional-831100
I0415 18:13:43.860277    5028 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls: (8.0549189s)
E0415 18:16:53.548026   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (29.56s)

TestFunctional/parallel/ImageCommands/Setup (4.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.6057252s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-831100
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image load --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image load --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr: (17.3882236s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls: (9.0346197s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26.42s)

TestFunctional/parallel/ProfileCmd/profile_not_create (12.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.0192148s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.53s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (23.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image load --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image load --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr: (14.5611129s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls: (8.9430517s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (23.51s)

TestFunctional/parallel/ProfileCmd/profile_list (12.07s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.7961815s)
functional_test.go:1311: Took "11.7962382s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "269.679ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.07s)

TestFunctional/parallel/ProfileCmd/profile_json_output (12.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.9298817s)
functional_test.go:1362: Took "11.9306388s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "353.5026ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (12.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (31.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.5734003s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-831100
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image load --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image load --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr: (18.1124505s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls: (8.4780247s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (31.46s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-831100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-831100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-831100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8240: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13032: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-831100 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-831100 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-831100 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [72fe023c-986c-4d38-8450-211569826804] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [72fe023c-986c-4d38-8450-211569826804] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0263151s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.68s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-831100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11368: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-831100 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-831100 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rgsgt" [2aaf3169-8329-4556-b6f5-77b68092d33d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rgsgt" [2aaf3169-8329-4556-b6f5-77b68092d33d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.0185464s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image save gcr.io/google-containers/addon-resizer:functional-831100 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image save gcr.io/google-containers/addon-resizer:functional-831100 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.3904005s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image rm gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image rm gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr: (9.5302143s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls: (8.3875332s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.92s)

TestFunctional/parallel/ServiceCmd/List (14.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 service list: (14.6578601s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.66s)

TestFunctional/parallel/ServiceCmd/JSONOutput (15.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 service list -o json: (15.2422835s)
functional_test.go:1490: Took "15.2427621s" to run "out/minikube-windows-amd64.exe -p functional-831100 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (15.24s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (21.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.652797s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image ls: (9.9364643s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (21.59s)

TestFunctional/parallel/DockerEnv/powershell (51.56s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-831100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-831100"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-831100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-831100": (33.9057465s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-831100 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-831100 docker-env | Invoke-Expression ; docker images": (17.6345156s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (51.56s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-831100
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 image save --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 image save --daemon gcr.io/google-containers/addon-resizer:functional-831100 --alsologtostderr: (11.5563322s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-831100
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.76s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 update-context --alsologtostderr -v=2: (2.7562566s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.76s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.84s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 update-context --alsologtostderr -v=2: (2.8408912s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.84s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.79s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-831100 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-831100 update-context --alsologtostderr -v=2: (2.786486s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.79s)

TestFunctional/delete_addon-resizer_images (0.5s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-831100
--- PASS: TestFunctional/delete_addon-resizer_images (0.50s)

TestFunctional/delete_my-image_image (0.19s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-831100
--- PASS: TestFunctional/delete_my-image_image (0.19s)

TestFunctional/delete_minikube_cached_images (0.2s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-831100
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

TestMultiControlPlane/serial/NodeLabels (0.21s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-653100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.21s)

TestImageBuild/serial/Setup (214.18s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-314000 --driver=hyperv
E0415 18:53:13.724000   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 18:55:10.523721   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-314000 --driver=hyperv: (3m34.1807393s)
--- PASS: TestImageBuild/serial/Setup (214.18s)

TestImageBuild/serial/NormalBuild (10.36s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-314000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-314000: (10.3593911s)
--- PASS: TestImageBuild/serial/NormalBuild (10.36s)

TestImageBuild/serial/BuildWithBuildArg (9.8s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-314000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-314000: (9.8033536s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.80s)

TestImageBuild/serial/BuildWithDockerIgnore (8.33s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-314000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-314000: (8.3279021s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.33s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.13s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-314000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-314000: (8.1308412s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.13s)

TestJSONOutput/start/Command (255.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-824800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0415 18:58:16.786826   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 19:00:10.525474   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-824800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m15.6939354s)
--- PASS: TestJSONOutput/start/Command (255.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-824800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-824800 --output=json --user=testUser: (8.634055s)
--- PASS: TestJSONOutput/pause/Command (8.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (8.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-824800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-824800 --output=json --user=testUser: (8.663127s)
--- PASS: TestJSONOutput/unpause/Command (8.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (36.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-824800 --output=json --user=testUser
E0415 19:01:53.570035   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-824800 --output=json --user=testUser: (36.8047676s)
--- PASS: TestJSONOutput/stop/Command (36.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.58s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-398200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-398200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (340.1059ms)
-- stdout --
	{"specversion":"1.0","id":"58a340e1-1945-49bf-a50e-c98529e6534d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-398200] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbc10f87-2a16-4a4f-a61d-6ee050fa4b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"fb067dc5-7796-41e1-b9f9-eb67db233284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"107f2db6-2b28-4a48-ad10-6191bc2cf1bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0187d607-90d0-40ba-b571-db235a777ac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18634"}}
	{"specversion":"1.0","id":"33204cdb-7d86-41f5-9026-f85250faa5cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"12c3d915-1296-47ed-90a0-39a1a94b224c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0415 19:02:26.616621    9040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-398200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-398200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-398200: (1.242181s)
--- PASS: TestErrorJSONOutput (1.58s)

TestMainNoArgs (0.27s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.27s)

TestMinikubeProfile (562.59s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-321200 --driver=hyperv
E0415 19:05:10.518272   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-321200 --driver=hyperv: (3m31.5888027s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-321200 --driver=hyperv
E0415 19:06:53.579549   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-321200 --driver=hyperv: (3m33.9271502s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-321200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0415 19:09:53.735817   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (20.5954218s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-321200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0415 19:10:10.519850   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (20.5406066s)
helpers_test.go:175: Cleaning up "second-321200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-321200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-321200: (47.6850721s)
helpers_test.go:175: Cleaning up "first-321200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-321200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-321200: (47.2481651s)
--- PASS: TestMinikubeProfile (562.59s)

TestMountStart/serial/StartWithMountFirst (166.68s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-235400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0415 19:11:53.569384   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-235400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m45.6666309s)
--- PASS: TestMountStart/serial/StartWithMountFirst (166.68s)

TestMountStart/serial/VerifyMountFirst (10.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-235400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-235400 ssh -- ls /minikube-host: (10.2620738s)
--- PASS: TestMountStart/serial/VerifyMountFirst (10.26s)

TestMountStart/serial/StartWithMountSecond (166.75s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-235400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0415 19:14:56.809406   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 19:15:10.526051   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 19:16:53.582964   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-235400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m45.742753s)
--- PASS: TestMountStart/serial/StartWithMountSecond (166.75s)

TestMountStart/serial/VerifyMountSecond (10.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-235400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-235400 ssh -- ls /minikube-host: (10.2682818s)
--- PASS: TestMountStart/serial/VerifyMountSecond (10.27s)

TestMountStart/serial/DeleteFirst (29.19s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-235400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-235400 --alsologtostderr -v=5: (29.1859357s)
--- PASS: TestMountStart/serial/DeleteFirst (29.19s)

TestMountStart/serial/VerifyMountPostDelete (10.16s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-235400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-235400 ssh -- ls /minikube-host: (10.1628497s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (10.16s)

TestMountStart/serial/Stop (28.46s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-235400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-235400: (28.4548432s)
--- PASS: TestMountStart/serial/Stop (28.46s)

TestMountStart/serial/RestartStopped (126.57s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-235400
E0415 19:20:10.531764   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-235400: (2m5.5569709s)
--- PASS: TestMountStart/serial/RestartStopped (126.57s)

TestMountStart/serial/VerifyMountPostStop (10.12s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-235400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-235400 ssh -- ls /minikube-host: (10.1203561s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (10.12s)

TestMultiNode/serial/FreshStart2Nodes (450.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-841000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0415 19:21:53.582486   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 19:25:10.527929   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 19:26:33.754096   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 19:26:53.590422   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-841000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m5.0267435s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 status --alsologtostderr: (25.7797149s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (450.81s)

TestMultiNode/serial/DeployApp2Nodes (9.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- rollout status deployment/busybox: (3.0288083s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-gkn8h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-gkn8h -- nslookup kubernetes.io: (1.9381965s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-hfpk6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-gkn8h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-hfpk6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-gkn8h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-841000 -- exec busybox-7fdf7869d9-hfpk6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.43s)

TestMultiNode/serial/AddNode (247.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-841000 -v 3 --alsologtostderr
E0415 19:31:36.822712   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 19:31:53.585349   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-841000 -v 3 --alsologtostderr: (3m28.9732175s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 status --alsologtostderr: (38.6638065s)
--- PASS: TestMultiNode/serial/AddNode (247.64s)

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-841000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

TestMultiNode/serial/ProfileList (10.55s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.5527332s)
--- PASS: TestMultiNode/serial/ProfileList (10.55s)

TestMultiNode/serial/CopyFile (392.12s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 status --output json --alsologtostderr
E0415 19:35:10.546379   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 status --output json --alsologtostderr: (38.8587646s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp testdata\cp-test.txt multinode-841000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp testdata\cp-test.txt multinode-841000:/home/docker/cp-test.txt: (10.2687293s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt": (10.1206041s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000.txt: (10.2174474s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt": (10.2515397s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000:/home/docker/cp-test.txt multinode-841000-m02:/home/docker/cp-test_multinode-841000_multinode-841000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000:/home/docker/cp-test.txt multinode-841000-m02:/home/docker/cp-test_multinode-841000_multinode-841000-m02.txt: (18.2042949s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt": (10.2145038s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test_multinode-841000_multinode-841000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test_multinode-841000_multinode-841000-m02.txt": (10.3435403s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000:/home/docker/cp-test.txt multinode-841000-m03:/home/docker/cp-test_multinode-841000_multinode-841000-m03.txt
E0415 19:36:53.583515   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000:/home/docker/cp-test.txt multinode-841000-m03:/home/docker/cp-test_multinode-841000_multinode-841000-m03.txt: (18.0269835s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test.txt": (10.2054283s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test_multinode-841000_multinode-841000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test_multinode-841000_multinode-841000-m03.txt": (10.1995155s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp testdata\cp-test.txt multinode-841000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp testdata\cp-test.txt multinode-841000-m02:/home/docker/cp-test.txt: (10.2432081s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt": (10.2198188s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000-m02.txt: (10.2079376s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt": (10.1816053s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt multinode-841000:/home/docker/cp-test_multinode-841000-m02_multinode-841000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt multinode-841000:/home/docker/cp-test_multinode-841000-m02_multinode-841000.txt: (17.8765101s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt": (10.2900265s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test_multinode-841000-m02_multinode-841000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test_multinode-841000-m02_multinode-841000.txt": (10.3042898s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt multinode-841000-m03:/home/docker/cp-test_multinode-841000-m02_multinode-841000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m02:/home/docker/cp-test.txt multinode-841000-m03:/home/docker/cp-test_multinode-841000-m02_multinode-841000-m03.txt: (17.921512s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test.txt": (10.2291343s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test_multinode-841000-m02_multinode-841000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test_multinode-841000-m02_multinode-841000-m03.txt": (10.2306058s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp testdata\cp-test.txt multinode-841000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp testdata\cp-test.txt multinode-841000-m03:/home/docker/cp-test.txt: (10.2630556s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt": (10.1527835s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2855924902\001\cp-test_multinode-841000-m03.txt: (10.1295895s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt": (10.2661795s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt multinode-841000:/home/docker/cp-test_multinode-841000-m03_multinode-841000.txt
E0415 19:40:10.543296   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt multinode-841000:/home/docker/cp-test_multinode-841000-m03_multinode-841000.txt: (17.7820053s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt": (10.2348712s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test_multinode-841000-m03_multinode-841000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000 "sudo cat /home/docker/cp-test_multinode-841000-m03_multinode-841000.txt": (10.2179247s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt multinode-841000-m02:/home/docker/cp-test_multinode-841000-m03_multinode-841000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 cp multinode-841000-m03:/home/docker/cp-test.txt multinode-841000-m02:/home/docker/cp-test_multinode-841000-m03_multinode-841000-m02.txt: (17.8788531s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m03 "sudo cat /home/docker/cp-test.txt": (10.2685676s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test_multinode-841000-m03_multinode-841000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 ssh -n multinode-841000-m02 "sudo cat /home/docker/cp-test_multinode-841000-m03_multinode-841000-m02.txt": (10.2872148s)
--- PASS: TestMultiNode/serial/CopyFile (392.12s)
TestMultiNode/serial/StopNode (82.61s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-841000 node stop m03: (26.3194749s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 status
E0415 19:41:53.588995   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-841000 status: exit status 7 (28.3168266s)
-- stdout --
	multinode-841000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0415 19:41:36.827743    9004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-841000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-841000 status --alsologtostderr: exit status 7 (27.9687432s)
-- stdout --
	multinode-841000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0415 19:42:05.128838    5856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 19:42:05.220815    5856 out.go:291] Setting OutFile to fd 848 ...
	I0415 19:42:05.221822    5856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:42:05.221822    5856 out.go:304] Setting ErrFile to fd 960...
	I0415 19:42:05.221822    5856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:42:05.238032    5856 out.go:298] Setting JSON to false
	I0415 19:42:05.238032    5856 mustload.go:65] Loading cluster: multinode-841000
	I0415 19:42:05.238032    5856 notify.go:220] Checking for updates...
	I0415 19:42:05.238983    5856 config.go:182] Loaded profile config "multinode-841000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 19:42:05.238983    5856 status.go:255] checking status of multinode-841000 ...
	I0415 19:42:05.240080    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:42:07.572498    5856 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:07.573381    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:07.573381    5856 status.go:330] multinode-841000 host status = "Running" (err=<nil>)
	I0415 19:42:07.573495    5856 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:42:07.574445    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:42:09.939722    5856 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:09.940518    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:09.940577    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:42:12.701415    5856 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:42:12.701761    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:12.701761    5856 host.go:66] Checking if "multinode-841000" exists ...
	I0415 19:42:12.717022    5856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:42:12.717022    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000 ).state
	I0415 19:42:15.004471    5856 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:15.005095    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:15.005181    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000 ).networkadapters[0]).ipaddresses[0]
	I0415 19:42:17.756789    5856 main.go:141] libmachine: [stdout =====>] : 172.19.62.237
	
	I0415 19:42:17.756789    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:17.758018    5856 sshutil.go:53] new ssh client: &{IP:172.19.62.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000\id_rsa Username:docker}
	I0415 19:42:17.853544    5856 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1364805s)
	I0415 19:42:17.867804    5856 ssh_runner.go:195] Run: systemctl --version
	I0415 19:42:17.891956    5856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:42:17.919731    5856 kubeconfig.go:125] found "multinode-841000" server: "https://172.19.62.237:8443"
	I0415 19:42:17.919731    5856 api_server.go:166] Checking apiserver status ...
	I0415 19:42:17.935538    5856 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:42:17.981480    5856 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2019/cgroup
	W0415 19:42:18.002627    5856 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2019/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 19:42:18.016765    5856 ssh_runner.go:195] Run: ls
	I0415 19:42:18.023953    5856 api_server.go:253] Checking apiserver healthz at https://172.19.62.237:8443/healthz ...
	I0415 19:42:18.031974    5856 api_server.go:279] https://172.19.62.237:8443/healthz returned 200:
	ok
	I0415 19:42:18.031974    5856 status.go:422] multinode-841000 apiserver status = Running (err=<nil>)
	I0415 19:42:18.032185    5856 status.go:257] multinode-841000 status: &{Name:multinode-841000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 19:42:18.032238    5856 status.go:255] checking status of multinode-841000-m02 ...
	I0415 19:42:18.032331    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:42:20.358810    5856 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:20.358911    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:20.358911    5856 status.go:330] multinode-841000-m02 host status = "Running" (err=<nil>)
	I0415 19:42:20.359050    5856 host.go:66] Checking if "multinode-841000-m02" exists ...
	I0415 19:42:20.359741    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:42:22.728190    5856 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:22.728190    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:22.728190    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:42:25.481567    5856 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:42:25.481567    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:25.481567    5856 host.go:66] Checking if "multinode-841000-m02" exists ...
	I0415 19:42:25.496024    5856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:42:25.496024    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m02 ).state
	I0415 19:42:27.796770    5856 main.go:141] libmachine: [stdout =====>] : Running
	
	I0415 19:42:27.796770    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:27.797197    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-841000-m02 ).networkadapters[0]).ipaddresses[0]
	I0415 19:42:30.522288    5856 main.go:141] libmachine: [stdout =====>] : 172.19.55.167
	
	I0415 19:42:30.522288    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:30.523358    5856 sshutil.go:53] new ssh client: &{IP:172.19.55.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-841000-m02\id_rsa Username:docker}
	I0415 19:42:30.623112    5856 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1262446s)
	I0415 19:42:30.637747    5856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 19:42:30.662572    5856 status.go:257] multinode-841000-m02 status: &{Name:multinode-841000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0415 19:42:30.662572    5856 status.go:255] checking status of multinode-841000-m03 ...
	I0415 19:42:30.663359    5856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-841000-m03 ).state
	I0415 19:42:32.945047    5856 main.go:141] libmachine: [stdout =====>] : Off
	
	I0415 19:42:32.945047    5856 main.go:141] libmachine: [stderr =====>] : 
	I0415 19:42:32.945047    5856 status.go:330] multinode-841000-m03 host status = "Stopped" (err=<nil>)
	I0415 19:42:32.945047    5856 status.go:343] host is not running, skipping remaining checks
	I0415 19:42:32.945450    5856 status.go:257] multinode-841000-m03 status: &{Name:multinode-841000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (82.61s)
TestPreload (550.1s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-246900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0415 19:55:10.551813   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 19:56:53.589409   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-246900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m41.0832538s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-246900 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-246900 image pull gcr.io/k8s-minikube/busybox: (9.1242113s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-246900
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-246900: (42.252296s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-246900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0415 19:59:53.785551   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 20:00:10.555720   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-246900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m44.8850921s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-246900 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-246900 image list: (7.9576523s)
helpers_test.go:175: Cleaning up "test-preload-246900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-246900
E0415 20:01:53.601873   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-246900: (44.7982971s)
--- PASS: TestPreload (550.10s)
TestScheduledStopWindows (351.93s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-965200 --memory=2048 --driver=hyperv
E0415 20:04:56.847528   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
E0415 20:05:10.554589   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-965200 --memory=2048 --driver=hyperv: (3m33.7984908s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-965200 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-965200 --schedule 5m: (11.7747554s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-965200 -n scheduled-stop-965200
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-965200 -n scheduled-stop-965200: exit status 1 (10.0143586s)
** stderr ** 
	W0415 20:06:03.583226    1076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-965200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-965200 -- sudo systemctl show minikube-scheduled-stop --no-page: (10.3083761s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-965200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-965200 --schedule 5s: (11.6780479s)
E0415 20:06:53.594459   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-965200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-965200: exit status 7 (2.6274345s)
-- stdout --
	scheduled-stop-965200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W0415 20:07:35.580525    8504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-965200 -n scheduled-stop-965200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-965200 -n scheduled-stop-965200: exit status 7 (2.6658667s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0415 20:07:38.231421    6192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-965200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-965200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-965200: (29.0582261s)
--- PASS: TestScheduledStopWindows (351.93s)
TestRunningBinaryUpgrade (1012.51s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.625541798.exe start -p running-upgrade-560000 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.625541798.exe start -p running-upgrade-560000 --memory=2200 --vm-driver=hyperv: (6m30.6934002s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-560000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0415 20:20:10.568510   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 20:21:36.866663   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-560000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (9m12.8287769s)
helpers_test.go:175: Cleaning up "running-upgrade-560000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-560000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-560000: (1m7.3288648s)
--- PASS: TestRunningBinaryUpgrade (1012.51s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-993800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-993800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (413.8516ms)
-- stdout --
	* [NoKubernetes-993800] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	W0415 20:08:09.962852   10656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)
TestStoppedBinaryUpgrade/Setup (0.7s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)
TestStoppedBinaryUpgrade/Upgrade (937.01s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2634873631.exe start -p stopped-upgrade-505200 --memory=2200 --vm-driver=hyperv
E0415 20:16:33.804449   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2634873631.exe start -p stopped-upgrade-505200 --memory=2200 --vm-driver=hyperv: (8m18.1568151s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2634873631.exe -p stopped-upgrade-505200 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2634873631.exe -p stopped-upgrade-505200 stop: (38.6840252s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-505200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0415 20:25:10.561487   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-831100\client.crt: The system cannot find the path specified.
E0415 20:26:53.606395   11272 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-961400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-505200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m40.1663348s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (937.01s)
TestPause/serial/Start (574.02s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-639400 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-639400 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (9m34.0193839s)
--- PASS: TestPause/serial/Start (574.02s)
TestStoppedBinaryUpgrade/MinikubeLogs (11.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-505200
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-505200: (11.3820174s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (11.38s)

Test skip (32/208)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-831100 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-831100 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 6052: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-831100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-831100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0285988s)

-- stdout --
	* [functional-831100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0415 18:12:49.533872   10720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:12:49.643860   10720 out.go:291] Setting OutFile to fd 856 ...
	I0415 18:12:49.643860   10720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:12:49.644954   10720 out.go:304] Setting ErrFile to fd 640...
	I0415 18:12:49.644954   10720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:12:49.673866   10720 out.go:298] Setting JSON to false
	I0415 18:12:49.680868   10720 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16496,"bootTime":1713188273,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:12:49.680868   10720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:12:49.686892   10720 out.go:177] * [functional-831100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:12:49.688873   10720 notify.go:220] Checking for updates...
	I0415 18:12:49.691868   10720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:12:49.694862   10720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:12:49.697892   10720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:12:49.702903   10720 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:12:49.705855   10720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:12:49.709872   10720 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:12:49.710871   10720 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-831100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-831100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0398195s)

-- stdout --
	* [functional-831100] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0415 18:12:03.526657   13924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:12:03.643613   13924 out.go:291] Setting OutFile to fd 1004 ...
	I0415 18:12:03.644675   13924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:12:03.644675   13924 out.go:304] Setting ErrFile to fd 1008...
	I0415 18:12:03.645244   13924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:12:03.671896   13924 out.go:298] Setting JSON to false
	I0415 18:12:03.677910   13924 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16450,"bootTime":1713188273,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0415 18:12:03.677910   13924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:12:03.684977   13924 out.go:177] * [functional-831100] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:12:03.687690   13924 notify.go:220] Checking for updates...
	I0415 18:12:03.690460   13924 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0415 18:12:03.713991   13924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:12:03.716802   13924 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0415 18:12:03.719590   13924 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:12:03.725194   13924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:12:03.730065   13924 config.go:182] Loaded profile config "functional-831100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:12:03.731666   13924 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)