Test Report: Hyper-V_Windows 21512

67b6671f4b7f755dd397ae36ae992d15d1f5bc42:2025-09-08:41332

Failed tests (11/208)

TestErrorSpam/setup (189.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-404300 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-404300 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 --driver=hyperv: (3m9.8041495s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-404300] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=21512
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-404300" primary control-plane node in "nospam-404300" cluster
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-404300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (189.81s)
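The failure above is the error-spam check: `minikube start` succeeded, but it emitted stderr lines the test does not tolerate. A minimal sketch of that kind of allowlist check in shell, using the stderr captured above (the allowlist pattern here is a hypothetical stand-in, not the test's actual list):

```shell
# stderr captured from `minikube start` (copied from the failure above)
stderr='! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/'

# Hypothetical allowlist: stderr patterns the check would tolerate (here, blank lines only)
allow='^[[:space:]]*$'

# Every line not matching the allowlist is flagged, mirroring the "unexpected stderr" report
unexpected=$(printf '%s\n' "$stderr" | grep -Ev "$allow")
count=$(printf '%s\n' "$unexpected" | grep -c .)
printf 'unexpected stderr lines: %s\n' "$count"
```

Both captured lines fail the allowlist, so the check reports two unexpected stderr lines, matching the two `error_spam_test.go:96` entries.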

TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 service --namespace=default --https --url hello-node: exit status 1 (15.0420191s)
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-264100 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 service hello-node --url --format={{.IP}}: exit status 1 (15.0103418s)
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-264100 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1558: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 service hello-node --url: exit status 1 (15.010083s)
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-264100 service hello-node --url": exit status 1
functional_test.go:1575: found endpoint for hello-node: 
functional_test.go:1583: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.01s)
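The three ServiceCmd failures share one symptom: `minikube service ... --url` exited 1 after roughly 15 seconds and returned no endpoint, so every downstream assertion (HTTPS URL, `{{.IP}}` format, scheme) saw an empty string. The `got scheme: *""*` message reduces to parsing a scheme off an empty endpoint; a small sketch of that parse (hypothetical, not the test's actual Go code):

```shell
# endpoint returned by `minikube service hello-node --url` (empty in the failure above)
endpoint=''

# the scheme is everything before '://'; an empty endpoint yields an empty scheme
scheme=${endpoint%%://*}

if [ "$scheme" != "http" ]; then
  printf 'expected scheme "http", got "%s"\n' "$scheme"
fi
```

With a healthy service the command would return something like `http://<node-ip>:<nodeport>` and the comparison would pass.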

TestMultiControlPlane/serial/PingHostFromPods (68.37s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- sh -c "ping -c 1 172.20.48.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.5380938s)

-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.20.48.1) from pod (busybox-7b57f96db7-2wjzs): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- sh -c "ping -c 1 172.20.48.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.5171189s)

-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.20.48.1) from pod (busybox-7b57f96db7-9vn9f): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- sh -c "ping -c 1 172.20.48.1"
E0908 11:25:15.266840   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.4938135s)

-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.20.48.1) from pod (busybox-7b57f96db7-qhn4b): exit status 1
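All three pods show the same pattern: DNS resolves `host.minikube.internal`, but ICMP to the Hyper-V host gateway (172.20.48.1) sees 100% packet loss — on Hyper-V this is commonly the Windows host firewall dropping inbound echo requests on the vEthernet (Default Switch) adapter, though the logs alone don't confirm the cause. The test's pass/fail comes down to the loss figure in the ping summary; a minimal sketch of that parse, using the output captured above:

```shell
# ping summary copied from the failure above
out='PING 172.20.48.1 (172.20.48.1): 56 data bytes

--- 172.20.48.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss'

# pull the loss percentage out of the statistics line
loss=$(printf '%s\n' "$out" | sed -n 's/.* \([0-9][0-9]*\)% packet loss.*/\1/p')

if [ "$loss" -ne 0 ]; then
  echo "FAIL: ${loss}% packet loss pinging host"
fi
```

Any nonzero loss fails the check; here it reports 100%, consistent with all three `ha_test.go:219` failures.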
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-331000 -n ha-331000
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-331000 -n ha-331000: (12.339305s)
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 logs -n 25: (8.6322921s)
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-264100 image ls --format table --alsologtostderr                                                               │ functional-264100 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:09 UTC │ 08 Sep 25 11:09 UTC │
	│ image   │ functional-264100 image build -t localhost/my-image:functional-264100 testdata\build --alsologtostderr                    │ functional-264100 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:09 UTC │ 08 Sep 25 11:09 UTC │
	│ image   │ functional-264100 image ls                                                                                                │ functional-264100 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:09 UTC │ 08 Sep 25 11:09 UTC │
	│ delete  │ -p functional-264100                                                                                                      │ functional-264100 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:12 UTC │
	│ start   │ ha-331000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=hyperv                                     │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:12 UTC │ 08 Sep 25 11:23 UTC │
	│ kubectl │ ha-331000 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- rollout status deployment/busybox                                                                    │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- nslookup kubernetes.io                                              │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- nslookup kubernetes.io                                              │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- nslookup kubernetes.io                                              │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- nslookup kubernetes.default                                         │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- nslookup kubernetes.default                                         │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- nslookup kubernetes.default                                         │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- nslookup kubernetes.default.svc.cluster.local                       │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- nslookup kubernetes.default.svc.cluster.local                       │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- nslookup kubernetes.default.svc.cluster.local                       │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- sh -c ping -c 1 172.20.48.1                                         │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │                     │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │ 08 Sep 25 11:25 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- sh -c ping -c 1 172.20.48.1                                         │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │                     │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │ 08 Sep 25 11:25 UTC │
	│ kubectl │ ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- sh -c ping -c 1 172.20.48.1                                         │ ha-331000         │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:12:30
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:12:30.357793    9032 out.go:360] Setting OutFile to fd 1616 ...
	I0908 11:12:30.428709    9032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:12:30.428709    9032 out.go:374] Setting ErrFile to fd 1280...
	I0908 11:12:30.428709    9032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:12:30.446928    9032 out.go:368] Setting JSON to false
	I0908 11:12:30.450308    9032 start.go:130] hostinfo: {"hostname":"minikube6","uptime":298802,"bootTime":1757031148,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 11:12:30.450503    9032 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 11:12:30.457055    9032 out.go:179] * [ha-331000] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 11:12:30.459803    9032 notify.go:220] Checking for updates...
	I0908 11:12:30.461787    9032 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 11:12:30.463881    9032 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:12:30.466843    9032 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 11:12:30.469654    9032 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:12:30.474812    9032 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:12:30.478251    9032 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:12:35.707222    9032 out.go:179] * Using the hyperv driver based on user configuration
	I0908 11:12:35.711183    9032 start.go:304] selected driver: hyperv
	I0908 11:12:35.711183    9032 start.go:918] validating driver "hyperv" against <nil>
	I0908 11:12:35.711183    9032 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:12:35.761304    9032 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:12:35.762253    9032 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:12:35.762253    9032 cni.go:84] Creating CNI manager for ""
	I0908 11:12:35.762253    9032 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0908 11:12:35.762253    9032 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:12:35.762253    9032 start.go:348] cluster config:
	{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:12:35.763216    9032 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:12:35.767916    9032 out.go:179] * Starting "ha-331000" primary control-plane node in "ha-331000" cluster
	I0908 11:12:35.771960    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:12:35.772244    9032 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 11:12:35.772244    9032 cache.go:58] Caching tarball of preloaded images
	I0908 11:12:35.772244    9032 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 11:12:35.772244    9032 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 11:12:35.773556    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:12:35.773870    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json: {Name:mk2586e434fbc41bf6cf75af480ab2fbb9c74b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:12:35.774597    9032 start.go:360] acquireMachinesLock for ha-331000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:12:35.775275    9032 start.go:364] duration metric: took 150.4µs to acquireMachinesLock for "ha-331000"
	I0908 11:12:35.775275    9032 start.go:93] Provisioning new machine with config: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:12:35.775275    9032 start.go:125] createHost starting for "" (driver="hyperv")
	I0908 11:12:35.779110    9032 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 11:12:35.780306    9032 start.go:159] libmachine.API.Create for "ha-331000" (driver="hyperv")
	I0908 11:12:35.780306    9032 client.go:168] LocalClient.Create starting
	I0908 11:12:35.780482    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 11:12:35.781329    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:12:35.781329    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:12:35.781538    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 11:12:35.781538    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:12:35.781538    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:12:35.782068    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 11:12:37.794131    9032 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 11:12:37.794131    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:37.794214    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 11:12:39.499693    9032 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 11:12:39.499918    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:39.500011    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:12:40.973413    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:12:40.973413    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:40.973656    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:12:44.568633    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:12:44.569720    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:44.572135    9032 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 11:12:45.147882    9032 main.go:141] libmachine: Creating SSH key...
	I0908 11:12:45.210329    9032 main.go:141] libmachine: Creating VM...
	I0908 11:12:45.210329    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:12:47.920599    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:12:47.921044    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:47.921044    9032 main.go:141] libmachine: Using switch "Default Switch"
	I0908 11:12:47.921044    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:12:49.632652    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:12:49.633427    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:49.633427    9032 main.go:141] libmachine: Creating VHD
	I0908 11:12:49.633427    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 11:12:53.113222    9032 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9F7591C7-D83B-4330-B73D-372ADE94B7E3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 11:12:53.113848    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:53.113848    9032 main.go:141] libmachine: Writing magic tar header
	I0908 11:12:53.113848    9032 main.go:141] libmachine: Writing SSH key tar header
	I0908 11:12:53.129508    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 11:12:56.218026    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:12:56.219009    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:56.219252    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\disk.vhd' -SizeBytes 20000MB
	I0908 11:12:58.649027    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:12:58.649027    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:58.649814    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-331000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 11:13:02.127811    9032 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-331000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 11:13:02.128517    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:02.128517    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-331000 -DynamicMemoryEnabled $false
	I0908 11:13:04.284275    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:04.284988    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:04.284988    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-331000 -Count 2
	I0908 11:13:06.363761    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:06.363761    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:06.363761    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-331000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\boot2docker.iso'
	I0908 11:13:08.923348    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:08.923988    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:08.924092    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-331000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\disk.vhd'
	I0908 11:13:11.480526    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:11.481619    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:11.481619    9032 main.go:141] libmachine: Starting VM...
	I0908 11:13:11.481672    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-331000
	I0908 11:13:14.644207    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:14.644207    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:14.645214    9032 main.go:141] libmachine: Waiting for host to start...
	I0908 11:13:14.645214    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:16.794568    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:16.794923    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:16.794923    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:19.268251    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:19.269387    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:20.270231    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:22.369013    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:22.369013    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:22.369013    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:24.853789    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:24.853789    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:25.854667    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:28.072373    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:28.073317    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:28.073404    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:30.559623    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:30.559661    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:31.560086    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:33.724043    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:33.724043    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:33.725005    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:36.226677    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:36.226677    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:37.227933    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:39.430834    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:39.430834    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:39.430913    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:41.936590    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:41.936590    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:41.936590    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:44.060700    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:44.060700    9032 main.go:141] libmachine: [stderr =====>] : 
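The stretch of log above is minikube's "Waiting for host to start..." phase: it re-runs the `( Hyper-V\Get-VM ha-331000 ).state` and `ipaddresses[0]` queries roughly once per second until Hyper-V reports an address (several empty probes, then `172.20.59.73`). The pattern reduces to a poll-until-nonempty retry loop; a minimal sketch, with the real Hyper-V query simulated by a counter (illustrative only, not minikube's code):

```shell
# Poll-until-nonempty retry loop, as in the host-start wait above.
# The probe is simulated: it "returns" an IP on the 3rd attempt,
# standing in for the PowerShell ipaddresses[0] query in the log.
ip=""
tries=0
while [ -z "$ip" ] && [ "$tries" -lt 10 ]; do
  tries=$((tries + 1))
  # stand-in for: powershell ... ((Get-VM ha-331000).networkadapters[0]).ipaddresses[0]
  if [ "$tries" -ge 3 ]; then ip="172.20.59.73"; fi
  [ -z "$ip" ] && sleep 0   # the real loop waits ~1s between probes
done
echo "got IP $ip after $tries attempts"
```

The bounded attempt count mirrors the driver's behavior of giving up after a fixed window rather than spinning forever.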
	I0908 11:13:44.060700    9032 machine.go:93] provisionDockerMachine start ...
	I0908 11:13:44.060700    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:46.178026    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:46.179157    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:46.179268    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:48.627344    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:48.627805    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:48.633487    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:13:48.650210    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:13:48.650210    9032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:13:48.784728    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:13:48.784728    9032 buildroot.go:166] provisioning hostname "ha-331000"
	I0908 11:13:48.784969    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:50.777190    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:50.778228    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:50.778228    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:53.138814    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:53.139060    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:53.144255    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:13:53.144834    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:13:53.144834    9032 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-331000 && echo "ha-331000" | sudo tee /etc/hostname
	I0908 11:13:53.304213    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-331000
	
	I0908 11:13:53.304213    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:55.327227    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:55.327496    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:55.327496    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:57.748087    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:57.748087    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:57.754180    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:13:57.754180    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:13:57.754717    9032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-331000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-331000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-331000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:13:57.909311    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
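The SSH command above makes the `127.0.1.1` hostname mapping idempotent: do nothing if the hostname is already present, rewrite an existing `127.0.1.1` line if there is one, otherwise append a new entry. The same logic against a throwaway temp file (no root needed; patterns simplified relative to the log's `grep -xq` forms; `ha-331000` is the hostname from the log):

```shell
# Idempotent /etc/hosts hostname update, run against a temp copy.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"

if ! grep -q 'ha-331000' "$hosts"; then
  if grep -q '^127\.0\.1\.1' "$hosts"; then
    # an entry exists: replace it in place (GNU sed -i)
    sed -i 's/^127\.0\.1\.1 .*/127.0.1.1 ha-331000/' "$hosts"
  else
    # no 127.0.1.1 entry yet: append one
    echo '127.0.1.1 ha-331000' >> "$hosts"
  fi
fi
grep '^127\.0\.1\.1' "$hosts"
```

Running the snippet a second time changes nothing, which is the point: provisioning may be retried, and the hosts file must not accumulate duplicate entries.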
	I0908 11:13:57.909311    9032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 11:13:57.909311    9032 buildroot.go:174] setting up certificates
	I0908 11:13:57.909311    9032 provision.go:84] configureAuth start
	I0908 11:13:57.909909    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:59.883485    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:59.883485    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:59.884028    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:02.341169    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:02.341793    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:02.341892    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:04.364727    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:04.364727    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:04.364973    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:06.784973    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:06.785922    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:06.786086    9032 provision.go:143] copyHostCerts
	I0908 11:14:06.786250    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 11:14:06.786700    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 11:14:06.786782    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 11:14:06.787245    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 11:14:06.788950    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 11:14:06.789369    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 11:14:06.789369    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 11:14:06.789744    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 11:14:06.790798    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 11:14:06.790798    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 11:14:06.790798    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 11:14:06.791625    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 11:14:06.792987    9032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-331000 san=[127.0.0.1 172.20.59.73 ha-331000 localhost minikube]
	I0908 11:14:06.981248    9032 provision.go:177] copyRemoteCerts
	I0908 11:14:06.990498    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:14:06.991519    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:08.990439    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:08.990439    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:08.990439    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:11.512635    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:11.512635    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:11.513413    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:11.633154    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6425979s)
	I0908 11:14:11.633154    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 11:14:11.633699    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:14:11.683791    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 11:14:11.683791    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0908 11:14:11.737610    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 11:14:11.737885    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:14:11.796087    9032 provision.go:87] duration metric: took 13.8866022s to configureAuth
	I0908 11:14:11.796200    9032 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:14:11.796903    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:14:11.797082    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:13.938181    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:13.938181    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:13.938181    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:16.302261    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:16.302261    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:16.308708    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:16.308882    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:16.308882    9032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 11:14:16.441323    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 11:14:16.441370    9032 buildroot.go:70] root file system type: tmpfs
	I0908 11:14:16.441607    9032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 11:14:16.441758    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:18.487696    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:18.487696    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:18.488523    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:20.936929    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:20.937053    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:20.942186    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:20.942908    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:20.943511    9032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 11:14:21.099174    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 11:14:21.099329    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:23.141580    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:23.141669    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:23.141669    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:25.544188    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:25.544188    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:25.551246    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:25.551246    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:25.551246    9032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 11:14:26.900965    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
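The `diff ... || { mv ...; systemctl ...; }` one-liner above is a compare-and-swap: the freshly generated unit replaces the installed one (triggering daemon-reload, enable, and restart) only when the two files differ. On first boot `diff` fails with the "can't stat" seen in the log, so the new file is always installed. A sketch of the pattern with temp files, the systemctl side effects stubbed by a flag (illustrative, not minikube's code):

```shell
# Compare-and-swap unit install: only replace (and "reload") on change.
# Temp files stand in for /lib/systemd/system/docker.service{,.new}.
dir=$(mktemp -d)
printf '[Service]\nExecStart=/usr/bin/dockerd --some-flags\n' > "$dir/docker.service.new"

reloaded=no
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  reloaded=yes   # real code: systemctl daemon-reload && enable docker && restart docker
}
echo "reloaded=$reloaded"
```

On an unchanged re-run (same content in both paths) `diff` succeeds and the braced group never executes, so Docker is not needlessly restarted.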
	
	I0908 11:14:26.900965    9032 machine.go:96] duration metric: took 42.8397288s to provisionDockerMachine
	I0908 11:14:26.900965    9032 client.go:171] duration metric: took 1m51.1192697s to LocalClient.Create
	I0908 11:14:26.900965    9032 start.go:167] duration metric: took 1m51.1192697s to libmachine.API.Create "ha-331000"
	I0908 11:14:26.900965    9032 start.go:293] postStartSetup for "ha-331000" (driver="hyperv")
	I0908 11:14:26.900965    9032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:14:26.913952    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:14:26.913952    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:29.028020    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:29.028232    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:29.028312    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:31.462100    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:31.462100    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:31.462646    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:31.566214    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6522042s)
	I0908 11:14:31.577383    9032 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:14:31.584290    9032 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:14:31.584290    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 11:14:31.584290    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 11:14:31.585666    9032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 11:14:31.585733    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 11:14:31.595280    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:14:31.615921    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 11:14:31.672755    9032 start.go:296] duration metric: took 4.7717302s for postStartSetup
	I0908 11:14:31.676077    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:33.710199    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:33.710847    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:33.710847    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:36.133027    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:36.133962    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:36.134121    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:14:36.136753    9032 start.go:128] duration metric: took 2m0.3599727s to createHost
	I0908 11:14:36.136753    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:38.193042    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:38.194125    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:38.194125    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:40.655340    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:40.656290    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:40.662420    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:40.663140    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:40.663140    9032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:14:40.786102    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757330080.780655954
	
	I0908 11:14:40.786102    9032 fix.go:216] guest clock: 1757330080.780655954
	I0908 11:14:40.786176    9032 fix.go:229] Guest: 2025-09-08 11:14:40.780655954 +0000 UTC Remote: 2025-09-08 11:14:36.1367531 +0000 UTC m=+125.870517401 (delta=4.643902854s)
	I0908 11:14:40.786244    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:42.832697    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:42.833212    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:42.833212    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:45.235582    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:45.235781    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:45.240952    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:45.241720    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:45.241720    9032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757330080
	I0908 11:14:45.385170    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 11:14:40 UTC 2025
	
	I0908 11:14:45.385170    9032 fix.go:236] clock set: Mon Sep  8 11:14:40 UTC 2025
	 (err=<nil>)
	I0908 11:14:45.385170    9032 start.go:83] releasing machines lock for "ha-331000", held for 2m9.6082739s
	I0908 11:14:45.385170    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:47.356280    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:47.356280    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:47.356598    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:49.796446    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:49.796446    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:49.800867    9032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 11:14:49.800953    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:49.809832    9032 ssh_runner.go:195] Run: cat /version.json
	I0908 11:14:49.809832    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:51.902233    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:51.903315    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:51.902233    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:51.903368    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:51.903368    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:51.903585    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:54.396627    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:54.396627    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:54.397005    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:54.459348    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:54.459348    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:54.460488    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:54.497508    9032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.6964582s)
	W0908 11:14:54.497705    9032 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 11:14:54.563396    9032 ssh_runner.go:235] Completed: cat /version.json: (4.7535048s)
	I0908 11:14:54.574840    9032 ssh_runner.go:195] Run: systemctl --version
	I0908 11:14:54.596926    9032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 11:14:54.606771    9032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	W0908 11:14:54.614314    9032 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 11:14:54.614314    9032 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
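The `curl.exe: command not found` failure above, which triggers these two warnings, comes from the Windows host-side binary name being passed verbatim into the Linux guest's shell. A hedged sketch of how such a command could be normalized before being run over SSH (hypothetical `remoteCurl` helper, not minikube code):

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// remoteCurl returns the curl invocation to run inside the Linux guest.
// Stripping a Windows ".exe" suffix avoids "bash: curl.exe: command not
// found" when the host-side binary name leaks into the SSH command.
func remoteCurl(hostBinary, url string) string {
	bin := strings.TrimSuffix(hostBinary, ".exe")
	return fmt.Sprintf("%s -sS -m 2 %s", bin, url)
}

func main() {
	host := "curl"
	if runtime.GOOS == "windows" {
		host = "curl.exe" // the name the log shows being forwarded
	}
	// Same result on every platform once the suffix is stripped.
	fmt.Println(remoteCurl(host, "https://registry.k8s.io/"))
}
```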
	I0908 11:14:54.618587    9032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:14:54.654199    9032 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:14:54.654199    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:14:54.654654    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:14:54.705501    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 11:14:54.747481    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 11:14:54.773239    9032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 11:14:54.783108    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 11:14:54.817477    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:14:54.851996    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 11:14:54.882480    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:14:54.913953    9032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:14:54.946092    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 11:14:54.978328    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 11:14:55.009026    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
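The run of `sed` commands above rewrites `/etc/containerd/config.toml` so containerd uses the cgroupfs driver (`SystemdCgroup = false`) and the `runc.v2` runtime. The first of those edits can be sketched in Go as a regex replacement; the log performs it with `sed -r` over SSH, so this is illustrative only:

```go
package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup mirrors the logged sed edit that forces
// `SystemdCgroup = false` in /etc/containerd/config.toml, preserving
// the line's original indentation via the capture group.
func setSystemdCgroup(toml string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(toml, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
}

func main() {
	in := "  [plugins]\n    SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroup(in, false))
	// prints:
	//   [plugins]
	//     SystemdCgroup = false
}
```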
	I0908 11:14:55.064253    9032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:14:55.083335    9032 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:14:55.094919    9032 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:14:55.127350    9032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:14:55.154329    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:55.368501    9032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 11:14:55.429500    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:14:55.442384    9032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 11:14:55.482716    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:14:55.513921    9032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:14:55.552707    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:14:55.587291    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:14:55.623993    9032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 11:14:55.685999    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:14:55.711198    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:14:55.761054    9032 ssh_runner.go:195] Run: which cri-dockerd
	I0908 11:14:55.778018    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 11:14:55.797312    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 11:14:55.846262    9032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 11:14:56.059445    9032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 11:14:56.258500    9032 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 11:14:56.258500    9032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 11:14:56.305704    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:14:56.342289    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:56.554618    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:14:57.251201    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:14:57.288326    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 11:14:57.322325    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:14:57.355090    9032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 11:14:57.596003    9032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 11:14:57.812272    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:58.021044    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 11:14:58.081993    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 11:14:58.115113    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:58.360825    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 11:14:58.524018    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:14:58.553032    9032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 11:14:58.563670    9032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 11:14:58.572810    9032 start.go:563] Will wait 60s for crictl version
	I0908 11:14:58.584409    9032 ssh_runner.go:195] Run: which crictl
	I0908 11:14:58.601621    9032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:14:58.662009    9032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 11:14:58.672918    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:14:58.717816    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:14:58.753604    9032 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 11:14:58.753716    9032 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 11:14:58.760889    9032 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 11:14:58.760889    9032 ip.go:218] interface addr: 172.20.48.1/20
	I0908 11:14:58.769265    9032 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 11:14:58.776058    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
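The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` shell pattern above is an idempotent upsert of one `/etc/hosts` entry: any stale line for `host.minikube.internal` is filtered out before the fresh one is appended. A minimal Go sketch of the same transformation (hypothetical `upsertHost` helper):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost rebuilds /etc/hosts content so it contains exactly one
// tab-separated entry for name, dropping any stale line first.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove an existing entry for this hostname
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "172.20.48.1", "host.minikube.internal"))
}
```

Re-running the upsert leaves the file unchanged, which is why the log can execute it unconditionally on every start.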
	I0908 11:14:58.811491    9032 kubeadm.go:875] updating cluster {Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:14:58.812276    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:14:58.821216    9032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 11:14:58.845022    9032 docker.go:691] Got preloaded images: 
	I0908 11:14:58.845094    9032 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0908 11:14:58.857533    9032 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0908 11:14:58.886072    9032 ssh_runner.go:195] Run: which lz4
	I0908 11:14:58.893676    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0908 11:14:58.904310    9032 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 11:14:58.911863    9032 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 11:14:58.911974    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0908 11:15:00.932916    9032 docker.go:655] duration metric: took 2.0388919s to copy over tarball
	I0908 11:15:00.943143    9032 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 11:15:09.925206    9032 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9818945s)
	I0908 11:15:09.925287    9032 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 11:15:09.988549    9032 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0908 11:15:10.007523    9032 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0908 11:15:10.057029    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:15:10.094996    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:15:10.329720    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:15:11.785241    9032 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4553761s)
	I0908 11:15:11.793894    9032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 11:15:11.825346    9032 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0908 11:15:11.825489    9032 cache_images.go:85] Images are preloaded, skipping loading
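The preload flow above probes for the tarball with `stat`, copies it over SSH when the probe fails, then extracts it with lz4-aware `tar`. A sketch of both steps, with the guest filesystem stood in for by the local one (hypothetical helpers; illustration only):

```go
package main

import (
	"fmt"
	"os"
)

// needsPreload mirrors the `stat /preloaded.tar.lz4` probe in the log:
// a failed stat means the tarball must be copied to the guest first.
func needsPreload(path string) bool {
	_, err := os.Stat(path)
	return err != nil
}

// extractCmd rebuilds the tar invocation the log runs after the copy,
// preserving extended attributes and decompressing through lz4.
func extractCmd(dest, tarball string) string {
	return fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C %s -xf %s", dest, tarball)
}

func main() {
	if needsPreload("/preloaded.tar.lz4") {
		fmt.Println(extractCmd("/var", "/preloaded.tar.lz4"))
	}
}
```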
	I0908 11:15:11.825489    9032 kubeadm.go:926] updating node { 172.20.59.73 8443 v1.34.0 docker true true} ...
	I0908 11:15:11.825772    9032 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-331000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:15:11.835298    9032 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 11:15:11.904022    9032 cni.go:84] Creating CNI manager for ""
	I0908 11:15:11.904128    9032 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0908 11:15:11.904184    9032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:15:11.904184    9032 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.59.73 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-331000 NodeName:ha-331000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.59.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.59.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:15:11.904528    9032 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.59.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-331000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.59.73"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.59.73"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:15:11.904614    9032 kube-vip.go:115] generating kube-vip config ...
	I0908 11:15:11.915931    9032 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0908 11:15:11.947488    9032 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0908 11:15:11.947702    9032 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0908 11:15:11.959081    9032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:15:11.978893    9032 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:15:11.989559    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0908 11:15:12.007474    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0908 11:15:12.049978    9032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:15:12.082591    9032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 11:15:12.118433    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0908 11:15:12.172639    9032 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0908 11:15:12.179383    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:15:12.220771    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:15:12.472069    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:15:12.543572    9032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000 for IP: 172.20.59.73
	I0908 11:15:12.543572    9032 certs.go:194] generating shared ca certs ...
	I0908 11:15:12.543673    9032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:12.544721    9032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 11:15:12.545123    9032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 11:15:12.545337    9032 certs.go:256] generating profile certs ...
	I0908 11:15:12.545887    9032 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key
	I0908 11:15:12.545887    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.crt with IP's: []
	I0908 11:15:12.661867    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.crt ...
	I0908 11:15:12.661867    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.crt: {Name:mk982cb9fe6c7582dc197ee82418c9baa0dde8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:12.664225    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key ...
	I0908 11:15:12.664225    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key: {Name:mk58ff292202a11ef18a9e3edabff73fc83409c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:12.665638    9032 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4
	I0908 11:15:12.665638    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.73 172.20.63.254]
	I0908 11:15:13.264316    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4 ...
	I0908 11:15:13.264316    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4: {Name:mke834e6e230ac291685eba75c0c27404a652f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:13.265261    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4 ...
	I0908 11:15:13.265261    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4: {Name:mk68689b6356cc39a769c2bbfea500a7d7e99a3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:13.267246    9032 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt
	I0908 11:15:13.281564    9032 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key
	I0908 11:15:13.283559    9032 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key
	I0908 11:15:13.283559    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt with IP's: []
	I0908 11:15:13.854056    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt ...
	I0908 11:15:13.854056    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt: {Name:mkcef962eee945cd174f72530a740f24f54057db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:13.855744    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key ...
	I0908 11:15:13.855744    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key: {Name:mkfd532185dbd2c791d00c24d248d2ec16ac09b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:13.856837    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 11:15:13.857366    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 11:15:13.857545    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 11:15:13.857708    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 11:15:13.857708    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 11:15:13.857708    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 11:15:13.858243    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 11:15:13.870973    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 11:15:13.871968    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 11:15:13.872459    9032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 11:15:13.872625    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 11:15:13.872625    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 11:15:13.873208    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 11:15:13.873371    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 11:15:13.874171    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 11:15:13.874475    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 11:15:13.874682    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:13.874682    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 11:15:13.875350    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:15:13.928682    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:15:13.981908    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:15:14.029791    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:15:14.086512    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 11:15:14.137945    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:15:14.195311    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:15:14.247472    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:15:14.300731    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 11:15:14.350818    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:15:14.400248    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 11:15:14.450020    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:15:14.495025    9032 ssh_runner.go:195] Run: openssl version
	I0908 11:15:14.515574    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 11:15:14.551261    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 11:15:14.557812    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 11:15:14.569809    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 11:15:14.593412    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 11:15:14.625859    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 11:15:14.658385    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 11:15:14.666401    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 11:15:14.678800    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 11:15:14.701256    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:15:14.734830    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:15:14.770278    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:14.778581    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:14.789528    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:14.808915    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:15:14.844204    9032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:15:14.851476    9032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 11:15:14.851476    9032 kubeadm.go:392] StartCluster: {Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:15:14.862672    9032 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 11:15:14.901218    9032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:15:14.937643    9032 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:15:14.966656    9032 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:15:14.983879    9032 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:15:14.983879    9032 kubeadm.go:157] found existing configuration files:
	
	I0908 11:15:14.997584    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:15:15.018336    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:15:15.029321    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:15:15.062987    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:15:15.083778    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:15:15.094559    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:15:15.124200    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:15:15.143296    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:15:15.154750    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:15:15.186008    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:15:15.205299    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:15:15.216079    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:15:15.236497    9032 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 11:15:15.461403    9032 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 11:15:35.211083    9032 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 11:15:35.211083    9032 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 11:15:35.214086    9032 out.go:252]   - Generating certificates and keys ...
	I0908 11:15:35.214086    9032 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 11:15:35.214086    9032 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 11:15:35.214086    9032 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-331000 localhost] and IPs [172.20.59.73 127.0.0.1 ::1]
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-331000 localhost] and IPs [172.20.59.73 127.0.0.1 ::1]
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 11:15:35.217072    9032 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 11:15:35.217072    9032 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 11:15:35.217072    9032 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 11:15:35.217072    9032 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 11:15:35.221071    9032 out.go:252]   - Booting up control plane ...
	I0908 11:15:35.221071    9032 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 11:15:35.221071    9032 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 11:15:35.221071    9032 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 11:15:35.221071    9032 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 11:15:35.222078    9032 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 11:15:35.223103    9032 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 11:15:35.223103    9032 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002172681s
	I0908 11:15:35.223103    9032 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 11:15:35.223103    9032 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://172.20.59.73:8443/livez
	I0908 11:15:35.223103    9032 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.625328387s
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.486846989s
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 12.284673563s
	I0908 11:15:35.224110    9032 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 11:15:35.225070    9032 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 11:15:35.225070    9032 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 11:15:35.225070    9032 kubeadm.go:310] [mark-control-plane] Marking the node ha-331000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 11:15:35.225070    9032 kubeadm.go:310] [bootstrap-token] Using token: wqmjmr.2qioywh307t3wcmb
	I0908 11:15:35.228084    9032 out.go:252]   - Configuring RBAC rules ...
	I0908 11:15:35.229173    9032 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 11:15:35.229173    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 11:15:35.229173    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 11:15:35.230079    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 11:15:35.230079    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 11:15:35.230079    9032 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 11:15:35.230079    9032 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 11:15:35.230079    9032 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 11:15:35.231082    9032 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 11:15:35.231082    9032 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 11:15:35.231082    9032 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.232113    9032 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 11:15:35.232113    9032 kubeadm.go:310] 
	I0908 11:15:35.232113    9032 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 11:15:35.232113    9032 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 11:15:35.232113    9032 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 11:15:35.232113    9032 kubeadm.go:310] 
	I0908 11:15:35.232113    9032 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 11:15:35.232113    9032 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 11:15:35.232113    9032 kubeadm.go:310] 
	I0908 11:15:35.233114    9032 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wqmjmr.2qioywh307t3wcmb \
	I0908 11:15:35.237081    9032 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 \
	I0908 11:15:35.237081    9032 kubeadm.go:310] 	--control-plane 
	I0908 11:15:35.237081    9032 kubeadm.go:310] 
	I0908 11:15:35.237081    9032 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 11:15:35.237081    9032 kubeadm.go:310] 
	I0908 11:15:35.237081    9032 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wqmjmr.2qioywh307t3wcmb \
	I0908 11:15:35.237081    9032 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 
	I0908 11:15:35.238163    9032 cni.go:84] Creating CNI manager for ""
	I0908 11:15:35.238163    9032 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0908 11:15:35.246087    9032 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 11:15:35.260092    9032 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 11:15:35.270116    9032 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 11:15:35.270189    9032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 11:15:35.321105    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 11:15:35.762326    9032 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:15:35.777996    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:15:35.780998    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-331000 minikube.k8s.io/updated_at=2025_09_08T11_15_35_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=ha-331000 minikube.k8s.io/primary=true
	I0908 11:15:35.805782    9032 ops.go:34] apiserver oom_adj: -16
	I0908 11:15:36.105994    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:15:36.330346    9032 kubeadm.go:1105] duration metric: took 567.8222ms to wait for elevateKubeSystemPrivileges
	I0908 11:15:36.330476    9032 kubeadm.go:394] duration metric: took 21.4787318s to StartCluster
	I0908 11:15:36.330476    9032 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:36.330476    9032 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 11:15:36.332234    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:36.333327    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 11:15:36.333327    9032 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:15:36.334013    9032 start.go:241] waiting for startup goroutines ...
	I0908 11:15:36.333954    9032 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:15:36.334108    9032 addons.go:69] Setting storage-provisioner=true in profile "ha-331000"
	I0908 11:15:36.334108    9032 addons.go:69] Setting default-storageclass=true in profile "ha-331000"
	I0908 11:15:36.334300    9032 addons.go:238] Setting addon storage-provisioner=true in "ha-331000"
	I0908 11:15:36.334460    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:15:36.334460    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:15:36.334460    9032 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-331000"
	I0908 11:15:36.335489    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:36.335951    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:36.568981    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 11:15:37.178804    9032 start.go:976] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0908 11:15:38.640645    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:38.641032    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:38.644728    9032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:15:38.647714    9032 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:15:38.647882    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:15:38.647944    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:38.872418    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:38.872418    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:38.874255    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 11:15:38.875929    9032 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 11:15:38.876476    9032 addons.go:238] Setting addon default-storageclass=true in "ha-331000"
	I0908 11:15:38.876530    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:15:38.877694    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:41.204331    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:41.204331    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:41.204331    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:15:41.279534    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:41.279534    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:41.280476    9032 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:15:41.280476    9032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:15:41.280603    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:43.559155    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:43.559713    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:43.559713    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:15:43.976897    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:15:43.977673    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:43.978132    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:15:44.122485    9032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:15:45.368004    9032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2455032s)
	I0908 11:15:46.185317    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:15:46.185317    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:46.185317    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:15:46.315402    9032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:15:46.506335    9032 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 11:15:46.513172    9032 addons.go:514] duration metric: took 10.1797173s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 11:15:46.513172    9032 start.go:246] waiting for cluster config update ...
	I0908 11:15:46.513172    9032 start.go:255] writing updated cluster config ...
	I0908 11:15:46.518151    9032 out.go:203] 
	I0908 11:15:46.534491    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:15:46.534662    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:15:46.539780    9032 out.go:179] * Starting "ha-331000-m02" control-plane node in "ha-331000" cluster
	I0908 11:15:46.543589    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:15:46.543589    9032 cache.go:58] Caching tarball of preloaded images
	I0908 11:15:46.543589    9032 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 11:15:46.543589    9032 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 11:15:46.544508    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:15:46.550848    9032 start.go:360] acquireMachinesLock for ha-331000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:15:46.551704    9032 start.go:364] duration metric: took 856.3µs to acquireMachinesLock for "ha-331000-m02"
	I0908 11:15:46.551849    9032 start.go:93] Provisioning new machine with config: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:15:46.551849    9032 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0908 11:15:46.558882    9032 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 11:15:46.558882    9032 start.go:159] libmachine.API.Create for "ha-331000" (driver="hyperv")
	I0908 11:15:46.559481    9032 client.go:168] LocalClient.Create starting
	I0908 11:15:46.559838    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 11:15:46.559838    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:15:46.559838    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:15:46.560563    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 11:15:46.560790    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:15:46.560790    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:15:46.560790    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 11:15:48.428006    9032 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 11:15:48.428006    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:48.429001    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 11:15:50.194992    9032 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 11:15:50.196079    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:50.196079    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:15:51.709546    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:15:51.710364    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:51.710364    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:15:55.207336    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:15:55.207336    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:55.210529    9032 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 11:15:55.840430    9032 main.go:141] libmachine: Creating SSH key...
	I0908 11:15:55.949265    9032 main.go:141] libmachine: Creating VM...
	I0908 11:15:55.949265    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:15:58.787680    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:15:58.787680    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:58.788466    9032 main.go:141] libmachine: Using switch "Default Switch"
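	[editor's note] The switch-selection step above (Get-VMSwitch filtered to External switches or the well-known Default Switch GUID, then Sort-Object by SwitchType) can be sketched in Python. The JSON is copied from the stdout above; `pick_switch` is a hypothetical helper mirroring my reading of the logged PowerShell filter, not minikube's actual code.

```python
import json

# JSON exactly as emitted by the ConvertTo-Json call in the log above.
GET_VMSWITCH_OUTPUT = """
[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]
"""

DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"
EXTERNAL = 2  # Hyper-V SwitchType: 0=Private, 1=Internal, 2=External

def pick_switch(raw: str):
    """Keep External switches or the Default Switch, sort by SwitchType, take first."""
    switches = json.loads(raw)
    candidates = [s for s in switches
                  if s["SwitchType"] == EXTERNAL or s["Id"] == DEFAULT_SWITCH_ID]
    candidates.sort(key=lambda s: s["SwitchType"])  # mirrors Sort-Object -Property SwitchType
    return candidates[0]["Name"] if candidates else None

print(pick_switch(GET_VMSWITCH_OUTPUT))  # -> Default Switch
```

	On this host only the Default Switch exists, which is why the log settles on it despite preferring an External switch when one is available.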
	I0908 11:15:58.788532    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:16:00.670147    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:16:00.671089    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:00.671089    9032 main.go:141] libmachine: Creating VHD
	I0908 11:16:00.671089    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 11:16:04.306654    9032 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5393F48A-195E-4D61-B4F5-BAA199D68F00
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 11:16:04.307258    9032 main.go:141] libmachine: [stderr =====>] : 
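	[editor's note] The New-VHD output above is internally consistent: a fixed VHD is the requested payload plus a 512-byte footer, which accounts exactly for the Size/FileSize pair in the log. A quick arithmetic check:

```python
# Values copied from the New-VHD output above.
size_bytes = 10 * 1024 * 1024   # -SizeBytes 10MB, as requested by the driver
logged_size = 10485760          # "Size" field
logged_file_size = 10486272     # "FileSize" field

assert size_bytes == logged_size
# Fixed VHDs store the payload followed by a 512-byte footer on disk.
assert logged_file_size - logged_size == 512
print("fixed VHD = payload + 512-byte footer")
```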
	I0908 11:16:04.307258    9032 main.go:141] libmachine: Writing magic tar header
	I0908 11:16:04.307258    9032 main.go:141] libmachine: Writing SSH key tar header
	I0908 11:16:04.321222    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 11:16:07.442033    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:07.442033    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:07.442033    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\disk.vhd' -SizeBytes 20000MB
	I0908 11:16:09.950684    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:09.950684    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:09.951272    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-331000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 11:16:13.580447    9032 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-331000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 11:16:13.581261    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:13.581261    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-331000-m02 -DynamicMemoryEnabled $false
	I0908 11:16:15.784325    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:15.785116    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:15.785116    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-331000-m02 -Count 2
	I0908 11:16:17.935594    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:17.935766    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:17.935766    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-331000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\boot2docker.iso'
	I0908 11:16:20.454138    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:20.454138    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:20.454138    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-331000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\disk.vhd'
	I0908 11:16:23.099129    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:23.099129    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:23.099476    9032 main.go:141] libmachine: Starting VM...
	I0908 11:16:23.099476    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-331000-m02
	I0908 11:16:26.195570    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:26.195570    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:26.195570    9032 main.go:141] libmachine: Waiting for host to start...
	I0908 11:16:26.195570    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:28.483547    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:28.483547    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:28.483547    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:31.002912    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:31.003160    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:32.003937    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:34.137931    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:34.138026    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:34.138096    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:36.662185    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:36.662185    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:37.663362    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:39.817092    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:39.817092    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:39.817656    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:42.335513    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:42.335513    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:43.336332    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:45.508536    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:45.508536    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:45.508774    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:48.098654    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:48.098654    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:49.099375    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:51.267915    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:51.268086    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:51.268086    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:53.824266    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:16:53.824627    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:53.824627    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:55.974500    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:55.974881    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:55.974881    9032 machine.go:93] provisionDockerMachine start ...
	I0908 11:16:55.975001    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:58.106416    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:58.106416    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:58.106416    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:00.609929    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:00.610427    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:00.617152    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:00.635974    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:00.636031    9032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:17:00.772318    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:17:00.772318    9032 buildroot.go:166] provisioning hostname "ha-331000-m02"
	I0908 11:17:00.772381    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:02.856544    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:02.856544    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:02.856776    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:05.362710    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:05.363088    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:05.368676    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:05.369256    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:05.369357    9032 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-331000-m02 && echo "ha-331000-m02" | sudo tee /etc/hostname
	I0908 11:17:05.532364    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-331000-m02
	
	I0908 11:17:05.532511    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:07.659030    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:07.659628    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:07.659753    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:10.109960    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:10.109960    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:10.115349    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:10.115506    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:10.115506    9032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-331000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-331000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-331000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:17:10.260836    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
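	[editor's note] The SSH snippet above is idempotent: it leaves /etc/hosts alone when a line already ends in the hostname, otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. A Python sketch of the same logic (`ensure_hostname` is an illustrative helper, not minikube code):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the logged shell: add or rewrite a 127.0.1.1 entry for `name`."""
    lines = hosts.splitlines()
    # Equivalent of: grep -xq '.*\s<name>' /etc/hosts (whole-line match)
    if any(re.fullmatch(r".*\s" + re.escape(name), ln) for ln in lines):
        return hosts  # already present, nothing to do
    for i, ln in enumerate(lines):
        if re.match(r"127\.0\.1\.1\s", ln):     # existing 127.0.1.1 entry
            lines[i] = f"127.0.1.1 {name}"      # sed -i 's/^127.0.1.1\s.*/.../'
            return "\n".join(lines) + "\n"
    lines.append(f"127.0.1.1 {name}")           # echo ... | sudo tee -a /etc/hosts
    return "\n".join(lines) + "\n"

print(ensure_hostname("127.0.0.1 localhost\n", "ha-331000-m02"))
```

	Running it twice is a no-op the second time, matching the empty SSH output logged above on a freshly provisioned VM.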
	I0908 11:17:10.260935    9032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 11:17:10.260935    9032 buildroot.go:174] setting up certificates
	I0908 11:17:10.261019    9032 provision.go:84] configureAuth start
	I0908 11:17:10.261110    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:12.383902    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:12.383902    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:12.384718    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:14.947822    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:14.947822    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:14.948189    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:17.056997    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:17.057977    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:17.058180    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:19.520830    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:19.521360    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:19.521360    9032 provision.go:143] copyHostCerts
	I0908 11:17:19.521525    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 11:17:19.521785    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 11:17:19.521785    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 11:17:19.522350    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 11:17:19.523315    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 11:17:19.523315    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 11:17:19.523315    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 11:17:19.524106    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 11:17:19.525625    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 11:17:19.525922    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 11:17:19.525922    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 11:17:19.526346    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 11:17:19.527312    9032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-331000-m02 san=[127.0.0.1 172.20.54.101 ha-331000-m02 localhost minikube]
	I0908 11:17:19.710288    9032 provision.go:177] copyRemoteCerts
	I0908 11:17:19.722033    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:17:19.722196    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:21.790613    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:21.790771    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:21.790844    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:24.326271    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:24.326271    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:24.326467    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:17:24.432055    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.70989s)
	I0908 11:17:24.432055    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 11:17:24.432688    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 11:17:24.488203    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 11:17:24.488203    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 11:17:24.541286    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 11:17:24.541768    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:17:24.594172    9032 provision.go:87] duration metric: took 14.332937s to configureAuth
	I0908 11:17:24.594172    9032 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:17:24.595120    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:17:24.595120    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:26.691698    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:26.691698    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:26.692003    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:29.274971    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:29.274971    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:29.281534    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:29.282268    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:29.282268    9032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 11:17:29.412282    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 11:17:29.412282    9032 buildroot.go:70] root file system type: tmpfs
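The root-filesystem probe above is a one-liner: GNU `df` can emit just the fstype column for a path, and `tail -n 1` drops the header row, leaving the bare type (here `tmpfs`). A standalone sketch, assuming GNU coreutils:

```shell
#!/bin/sh
# Sketch of the probe from the log: report only the filesystem type of /.
# --output=fstype is a GNU df extension; tail strips the "Type" header line.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```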
	I0908 11:17:29.412495    9032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 11:17:29.412587    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:31.457090    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:31.457710    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:31.457820    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:33.955937    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:33.955937    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:33.960952    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:33.961093    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:33.961093    9032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.20.59.73"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 11:17:34.129337    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.20.59.73
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 11:17:34.129337    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:36.212244    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:36.212244    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:36.212356    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:38.704308    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:38.705335    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:38.710734    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:38.711541    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:38.711541    9032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 11:17:40.131463    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 11:17:40.131463    9032 machine.go:96] duration metric: took 44.1560267s to provisionDockerMachine
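The unit install just above uses a compare-then-swap pattern: write the candidate unit to `docker.service.new`, and only when `diff` fails (the file differs, or, as in this log, does not exist yet) move it into place and reload/restart. A minimal standalone sketch of that pattern — paths and unit content are illustrative, not minikube's exact files:

```shell
#!/bin/sh
# Sketch of the compare-then-swap unit update seen in the log.
# The real target is /lib/systemd/system/docker.service; local files here.
UNIT=./docker.service
NEW=./docker.service.new

printf '%s\n' '[Unit]' 'Description=demo' > "$NEW"

# Identical files: diff exits 0 and nothing changes.
# Different or missing: diff fails, so the new unit is installed.
diff -u "$UNIT" "$NEW" 2>/dev/null || {
  mv "$NEW" "$UNIT"
  # The real flow then runs:
  #   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
}

cat "$UNIT"
```

Because a missing original also makes `diff` fail, first-time provisioning (the "can't stat" case in the log) and genuine updates share one code path.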
	I0908 11:17:40.131463    9032 client.go:171] duration metric: took 1m53.5705589s to LocalClient.Create
	I0908 11:17:40.131463    9032 start.go:167] duration metric: took 1m53.5711582s to libmachine.API.Create "ha-331000"
	I0908 11:17:40.131463    9032 start.go:293] postStartSetup for "ha-331000-m02" (driver="hyperv")
	I0908 11:17:40.131463    9032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:17:40.144383    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:17:40.144383    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:42.197302    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:42.197302    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:42.197302    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:44.648358    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:44.648626    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:44.649102    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:17:44.760451    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6160103s)
	I0908 11:17:44.773311    9032 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:17:44.782522    9032 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:17:44.782637    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 11:17:44.783706    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 11:17:44.785532    9032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 11:17:44.785700    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 11:17:44.795092    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:17:44.815158    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 11:17:44.870331    9032 start.go:296] duration metric: took 4.7388093s for postStartSetup
	I0908 11:17:44.873116    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:46.942038    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:46.942374    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:46.942374    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:49.399186    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:49.399362    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:49.399537    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:17:49.401946    9032 start.go:128] duration metric: took 2m2.8485578s to createHost
	I0908 11:17:49.402026    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:51.446698    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:51.446698    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:51.446698    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:53.907170    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:53.907326    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:53.911908    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:53.912096    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:53.912684    9032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:17:54.034546    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757330274.053124979
	
	I0908 11:17:54.034546    9032 fix.go:216] guest clock: 1757330274.053124979
	I0908 11:17:54.034546    9032 fix.go:229] Guest: 2025-09-08 11:17:54.053124979 +0000 UTC Remote: 2025-09-08 11:17:49.4019464 +0000 UTC m=+319.133291601 (delta=4.651178579s)
	I0908 11:17:54.034546    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:56.082190    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:56.082333    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:56.082472    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:58.548657    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:58.548657    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:58.555819    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:58.556236    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:58.556236    9032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757330274
	I0908 11:17:58.708912    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 11:17:54 UTC 2025
	
	I0908 11:17:58.709025    9032 fix.go:236] clock set: Mon Sep  8 11:17:54 UTC 2025
	 (err=<nil>)
	I0908 11:17:58.709025    9032 start.go:83] releasing machines lock for "ha-331000-m02", held for 2m12.1556651s
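The clock-fix step above reads the guest's epoch time (`date +%s.%N`), computes the drift against the host's reference time, and corrects it with `sudo date -s @<epoch>`. A local sketch of the arithmetic — both clocks are local here, so the delta is ~0, whereas the real code reads the guest clock over SSH:

```shell
#!/bin/sh
# Sketch of the guest-clock sync from the log: measure drift, then (in the
# real flow) set the guest clock over SSH with:  sudo date -s "@${ref_epoch}"
guest_epoch=$(date +%s)
ref_epoch=$(date +%s)
delta=$((ref_epoch - guest_epoch))
echo "delta=${delta}s"
```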
	I0908 11:17:58.709139    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:18:00.746479    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:00.746788    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:00.746846    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:03.272758    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:18:03.272758    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:03.277184    9032 out.go:179] * Found network options:
	I0908 11:18:03.279945    9032 out.go:179]   - NO_PROXY=172.20.59.73
	W0908 11:18:03.282709    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:18:03.286004    9032 out.go:179]   - NO_PROXY=172.20.59.73
	W0908 11:18:03.288615    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:18:03.289998    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:18:03.293113    9032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 11:18:03.293113    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:18:03.302628    9032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:18:03.302628    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:18:05.458514    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:05.458543    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:05.458608    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:05.503512    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:05.503512    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:05.504338    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:08.166708    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:18:08.166708    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:08.167320    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:18:08.198773    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:18:08.198773    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:08.199226    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:18:08.278196    9032 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.975506s)
	W0908 11:18:08.278196    9032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:18:08.289190    9032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9960152s)
	W0908 11:18:08.289190    9032 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
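The exit-127 failure above appears to be the host-side binary name leaking into the guest command: the probe is composed as `curl.exe` (the Windows name) but executed by the Linux guest's shell, where only `curl` exists, so the registry warning later in the log reflects the probe breaking rather than a proven network problem. A sketch of picking the binary name per target OS rather than per host OS (names here are illustrative):

```shell
#!/bin/sh
# Sketch: choose the curl binary name for where the command RUNS, not where
# it is composed. The log shows the probe running inside a Linux guest.
target_os=linux
case "$target_os" in
  windows) curl_bin=curl.exe ;;
  *)       curl_bin=curl ;;
esac
echo "probe command: $curl_bin -sS -m 2 https://registry.k8s.io/"
```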
	I0908 11:18:08.291383    9032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:18:08.329838    9032 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:18:08.329838    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:18:08.330124    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:18:08.382551    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0908 11:18:08.403072    9032 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 11:18:08.403072    9032 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 11:18:08.421197    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 11:18:08.443726    9032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 11:18:08.454897    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 11:18:08.489123    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:18:08.521304    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 11:18:08.553771    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:18:08.583398    9032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:18:08.617814    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 11:18:08.658425    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 11:18:08.691595    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
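The run of `sed -i -r` commands above rewrites keys in `/etc/containerd/config.toml` while preserving each line's indentation via a capture group (`^( *)key = .*` → `\1key = value`). A standalone sketch against a local copy, assuming GNU sed:

```shell
#!/bin/sh
# Sketch of the indentation-preserving config rewrites from the log,
# applied to a local file instead of /etc/containerd/config.toml.
CFG=./config.demo.toml
printf '  SystemdCgroup = true\n  sandbox_image = "old"\n' > "$CFG"

# The captured leading spaces are re-emitted, so nesting depth is untouched.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' "$CFG"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"

cat "$CFG"
```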
	I0908 11:18:08.724398    9032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:18:08.744499    9032 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:18:08.756530    9032 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:18:08.789533    9032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:18:08.818604    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:09.049273    9032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 11:18:09.106516    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:18:09.117746    9032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 11:18:09.157730    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:18:09.197435    9032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:18:09.248145    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:18:09.285694    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:18:09.323813    9032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 11:18:09.393640    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:18:09.418705    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:18:09.472956    9032 ssh_runner.go:195] Run: which cri-dockerd
	I0908 11:18:09.493765    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 11:18:09.520621    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 11:18:09.577214    9032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 11:18:09.824772    9032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 11:18:10.049322    9032 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 11:18:10.049322    9032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 11:18:10.104825    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:18:10.145003    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:10.404218    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:18:11.177668    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:18:11.223271    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 11:18:11.261852    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:18:11.297586    9032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 11:18:11.543616    9032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 11:18:11.781366    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:12.014988    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 11:18:12.090183    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 11:18:12.126956    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:12.360568    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 11:18:12.522501    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:18:12.551210    9032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 11:18:12.562455    9032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 11:18:12.572659    9032 start.go:563] Will wait 60s for crictl version
	I0908 11:18:12.586018    9032 ssh_runner.go:195] Run: which crictl
	I0908 11:18:12.604563    9032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:18:12.667215    9032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 11:18:12.678840    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:18:12.729551    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:18:12.768935    9032 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 11:18:12.771496    9032 out.go:179]   - env NO_PROXY=172.20.59.73
	I0908 11:18:12.774284    9032 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 11:18:12.781377    9032 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 11:18:12.781377    9032 ip.go:218] interface addr: 172.20.48.1/20
	I0908 11:18:12.793068    9032 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 11:18:12.801373    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
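The `/etc/hosts` update above is a filter-and-rewrite: strip any existing `host.minikube.internal` line, append the fresh mapping, write to a temp file, then replace the hosts file in one copy so it is never left partially written. A sketch of the same pattern against a local file:

```shell
#!/bin/sh
# Sketch of the hosts-file update from the log, using a local file
# instead of /etc/hosts (so no sudo is needed).
HOSTS=./hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$HOSTS"

IP=172.20.48.1
# Drop any stale entry for the name, then append the current mapping.
{ grep -v 'host\.minikube\.internal$' "$HOSTS"
  printf '%s\thost.minikube.internal\n' "$IP"
} > "$HOSTS.tmp"
cp "$HOSTS.tmp" "$HOSTS" && rm -f "$HOSTS.tmp"

cat "$HOSTS"
```

Writing to a temp file first means a reader of the hosts file sees either the old or the new contents, never a half-written mix.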
	I0908 11:18:12.831281    9032 mustload.go:65] Loading cluster: ha-331000
	I0908 11:18:12.832456    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:18:12.833321    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:18:14.898826    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:14.898826    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:14.899866    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:18:14.900617    9032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000 for IP: 172.20.54.101
	I0908 11:18:14.900617    9032 certs.go:194] generating shared ca certs ...
	I0908 11:18:14.900617    9032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:14.901417    9032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 11:18:14.901742    9032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 11:18:14.901945    9032 certs.go:256] generating profile certs ...
	I0908 11:18:14.902654    9032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key
	I0908 11:18:14.902760    9032 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3
	I0908 11:18:14.902887    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.73 172.20.54.101 172.20.63.254]
	I0908 11:18:15.091457    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3 ...
	I0908 11:18:15.091457    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3: {Name:mkc127c97031bee384e7b4182aa0bfd415af1e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:15.093209    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3 ...
	I0908 11:18:15.093209    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3: {Name:mke63046484a6d72a0a1d9017f58266a707b2dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:15.093728    9032 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt
	I0908 11:18:15.109935    9032 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key
	I0908 11:18:15.110698    9032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key
	I0908 11:18:15.110698    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 11:18:15.110698    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 11:18:15.111886    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 11:18:15.111954    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 11:18:15.112293    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 11:18:15.112522    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 11:18:15.122917    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 11:18:15.123216    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 11:18:15.123937    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 11:18:15.124494    9032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 11:18:15.124562    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 11:18:15.124936    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 11:18:15.125293    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 11:18:15.125499    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 11:18:15.126367    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 11:18:15.126643    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 11:18:15.126643    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:15.126643    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 11:18:15.127418    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:18:17.252918    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:17.252918    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:17.252918    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:19.743712    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:18:19.743712    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:19.744262    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:18:19.846833    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0908 11:18:19.855101    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0908 11:18:19.888229    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0908 11:18:19.897687    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0908 11:18:19.931135    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0908 11:18:19.938454    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0908 11:18:19.974285    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0908 11:18:19.982469    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0908 11:18:20.015436    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0908 11:18:20.023567    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0908 11:18:20.067132    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0908 11:18:20.076365    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0908 11:18:20.099309    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:18:20.153382    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:18:20.204580    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:18:20.253344    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:18:20.300247    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0908 11:18:20.352308    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:18:20.401981    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:18:20.452229    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:18:20.510802    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 11:18:20.564312    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:18:20.619212    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 11:18:20.669382    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0908 11:18:20.703164    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0908 11:18:20.736216    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0908 11:18:20.768065    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0908 11:18:20.804071    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0908 11:18:20.838958    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0908 11:18:20.873387    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0908 11:18:20.921666    9032 ssh_runner.go:195] Run: openssl version
	I0908 11:18:20.941689    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 11:18:20.974344    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 11:18:20.981631    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 11:18:20.991012    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 11:18:21.015028    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:18:21.050940    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:18:21.097872    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:21.109589    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:21.122221    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:21.144256    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:18:21.181670    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 11:18:21.216154    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 11:18:21.224143    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 11:18:21.235325    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 11:18:21.256323    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
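The three `openssl x509 -hash` / `ln -fs` pairs above install each CA into the OpenSSL trust store, where certificates are looked up by a subject-hash filename. A minimal sketch of that mechanism (not minikube's actual code; the throwaway CA generated here merely stands in for `minikubeCA.pem`):

```shell
tmp=$(mktemp -d)
# Generate a throwaway self-signed CA standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$tmp/ca.key" -out "$tmp/minikubeCA.pem" -days 1 >/dev/null 2>&1
# OpenSSL locates a CA via /etc/ssl/certs/<subject-hash>.0, so each
# installed PEM gets a symlink named after its subject hash.
h=$(openssl x509 -hash -noout -in "$tmp/minikubeCA.pem")
ln -fs "$tmp/minikubeCA.pem" "$tmp/$h.0"   # e.g. b5213941.0 in the log above
```

The `test -L … || ln -fs …` guard in the log makes the step idempotent across restarts.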
	I0908 11:18:21.291948    9032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:18:21.298559    9032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
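The failed `stat` above is not an error: minikube runs the command over SSH and reads a nonzero exit status as "cert doesn't exist, likely first start". A minimal sketch of that probe pattern (the wrapper function is illustrative, not minikube's code):

```shell
# Existence probe via stat's exit status, as in the log above.
probe() {  # probe <path> -> prints "exists" or "missing"
  if stat -c %s "$1" >/dev/null 2>&1; then
    echo exists
  else
    echo missing   # read as "likely first start" for profile certs
  fi
}
# usage: probe /var/lib/minikube/certs/apiserver-kubelet-client.crt
```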
	I0908 11:18:21.298559    9032 kubeadm.go:926] updating node {m02 172.20.54.101 8443 v1.34.0 docker true true} ...
	I0908 11:18:21.298559    9032 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-331000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.54.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:18:21.299099    9032 kube-vip.go:115] generating kube-vip config ...
	I0908 11:18:21.309225    9032 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0908 11:18:21.338558    9032 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0908 11:18:21.338999    9032 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
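The `cp_enable`/`lb_enable` settings in the manifest above are gated on the `modprobe --all ip_vs …` call earlier in the log ("auto-enabling control-plane load-balancing in kube-vip"). A hedged sketch of that prerequisite check, rewritten to read `/proc/modules`/`modinfo` instead of requiring root:

```shell
# Probe the IPVS kernel modules that kube-vip's load-balancing mode needs
# (module names taken from the modprobe line in the log above).
out=""
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  if grep -qw "^$m" /proc/modules 2>/dev/null || modinfo "$m" >/dev/null 2>&1; then
    out="$out$m:ok\n"
  else
    out="$out$m:missing\n"   # lb_enable would be left off without IPVS
  fi
done
printf "%b" "$out"
```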
	I0908 11:18:21.350614    9032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:18:21.368141    9032 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0908 11:18:21.379890    9032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0908 11:18:21.403513    9032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm
	I0908 11:18:21.403705    9032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet
	I0908 11:18:21.403705    9032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl
	I0908 11:18:22.667427    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:18:22.681459    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:18:22.692424    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:18:22.698419    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:18:22.699446    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0908 11:18:22.699446    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0908 11:18:22.712568    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:18:22.827611    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0908 11:18:22.827611    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0908 11:18:23.198236    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:18:23.216567    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:18:23.241737    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0908 11:18:23.241737    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
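The `?checksum=file:…kubeadm.sha256` suffix on the download URLs above means each binary is verified against its published SHA-256 digest before being cached. A hedged sketch of that verification, simulated with a local temp file so no network is involved:

```shell
tmp=$(mktemp -d)
# Stand-in for the downloaded binary and its published .sha256 sidecar file.
printf 'fake-kubeadm-bytes' > "$tmp/kubeadm"
sha256sum "$tmp/kubeadm" | awk '{print $1}' > "$tmp/kubeadm.sha256"
# Compare the artifact's digest against the published one.
want=$(cat "$tmp/kubeadm.sha256")
got=$(sha256sum "$tmp/kubeadm" | awk '{print $1}')
if [ "$want" = "$got" ]; then echo "checksum ok"; else echo "checksum mismatch"; fi
```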
	I0908 11:18:24.078405    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0908 11:18:24.101187    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0908 11:18:24.138594    9032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:18:24.174666    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0908 11:18:24.224918    9032 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0908 11:18:24.232044    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
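The `/etc/hosts` one-liner above is idempotent: it strips any stale `control-plane.minikube.internal` entry, appends the current VIP, and swaps the file in via a temp copy. The same shape as a reusable function (illustrative, not minikube's code):

```shell
# Idempotent hosts-file update, mirroring the grep -v / echo / cp pipeline
# in the log line above.
update_hosts() {  # update_hosts <ip> <name> <hosts_file>
  tab=$(printf '\t')
  { grep -v "${tab}$2\$" "$3"; printf '%s\t%s\n' "$1" "$2"; } > "$3.tmp"
  mv "$3.tmp" "$3"
}
# usage: update_hosts 172.20.63.254 control-plane.minikube.internal /etc/hosts
```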
	I0908 11:18:24.268682    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:24.501410    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:18:24.551792    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:18:24.579268    9032 start.go:317] joinCluster: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:18:24.579268    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0908 11:18:24.579976    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:18:26.702495    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:26.703564    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:26.703564    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:29.334198    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:18:29.334198    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:29.334890    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:18:29.588815    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0094841s)
	I0908 11:18:29.588815    9032 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:18:29.590117    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token esrdrt.s724uc2c04tfdq0u --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m02 --control-plane --apiserver-advertise-address=172.20.54.101 --apiserver-bind-port=8443"
	I0908 11:19:21.377409    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token esrdrt.s724uc2c04tfdq0u --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m02 --control-plane --apiserver-advertise-address=172.20.54.101 --apiserver-bind-port=8443": (51.7865747s)
	I0908 11:19:21.377589    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0908 11:19:22.386354    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-331000-m02 minikube.k8s.io/updated_at=2025_09_08T11_19_22_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=ha-331000 minikube.k8s.io/primary=false
	I0908 11:19:22.580464    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-331000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0908 11:19:22.764949    9032 start.go:319] duration metric: took 58.1849501s to joinCluster
	I0908 11:19:22.764949    9032 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:19:22.765945    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:19:22.773206    9032 out.go:179] * Verifying Kubernetes components...
	I0908 11:19:22.787187    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:19:23.323932    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:19:23.368094    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0908 11:19:23.368322    9032 kubeadm.go:483] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.59.73:8443
	I0908 11:19:23.369777    9032 node_ready.go:35] waiting up to 6m0s for node "ha-331000-m02" to be "Ready" ...
	W0908 11:19:25.376510    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:27.377149    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:29.377666    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:31.884734    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:34.378771    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:36.876326    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:38.877658    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:40.883387    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	I0908 11:19:42.876284    9032 node_ready.go:49] node "ha-331000-m02" is "Ready"
	I0908 11:19:42.876359    9032 node_ready.go:38] duration metric: took 19.5063375s for node "ha-331000-m02" to be "Ready" ...
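The `node_ready` lines above trace a poll-until-ready loop: the node status is re-checked roughly every 2s until it reports "Ready" or the 6m deadline expires. A hedged sketch of that retry pattern as a generic helper (the helper and its usage line are illustrative, not minikube's code):

```shell
# Retry a probe command until it succeeds or a deadline passes.
wait_until() {  # wait_until <timeout_s> <cmd...>; returns 1 on deadline
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}
# usage (hypothetical probe): wait_until 360 check_node_ready ha-331000-m02
```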
	I0908 11:19:42.876359    9032 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:19:42.888654    9032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:19:42.927725    9032 api_server.go:72] duration metric: took 20.162524s to wait for apiserver process to appear ...
	I0908 11:19:42.927815    9032 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:19:42.927869    9032 api_server.go:253] Checking apiserver healthz at https://172.20.59.73:8443/healthz ...
	I0908 11:19:42.936812    9032 api_server.go:279] https://172.20.59.73:8443/healthz returned 200:
	ok
	I0908 11:19:42.938815    9032 api_server.go:141] control plane version: v1.34.0
	I0908 11:19:42.938815    9032 api_server.go:131] duration metric: took 10.9997ms to wait for apiserver health ...
	I0908 11:19:42.938815    9032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:19:42.947238    9032 system_pods.go:59] 17 kube-system pods found
	I0908 11:19:42.947238    9032 system_pods.go:61] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:19:42.947238    9032 system_pods.go:74] duration metric: took 8.4226ms to wait for pod list to return data ...
	I0908 11:19:42.947238    9032 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:19:42.953347    9032 default_sa.go:45] found service account: "default"
	I0908 11:19:42.953396    9032 default_sa.go:55] duration metric: took 6.158ms for default service account to be created ...
	I0908 11:19:42.953396    9032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:19:42.960606    9032 system_pods.go:86] 17 kube-system pods found
	I0908 11:19:42.960606    9032 system_pods.go:89] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:19:42.960606    9032 system_pods.go:89] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:19:42.960606    9032 system_pods.go:89] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:19:42.960919    9032 system_pods.go:126] duration metric: took 7.5236ms to wait for k8s-apps to be running ...
	I0908 11:19:42.960919    9032 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:19:42.971535    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:19:43.006887    9032 system_svc.go:56] duration metric: took 45.9666ms WaitForService to wait for kubelet
	I0908 11:19:43.006887    9032 kubeadm.go:578] duration metric: took 20.2416846s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:19:43.006887    9032 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:19:43.015011    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:19:43.015011    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:19:43.015550    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:19:43.015550    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:19:43.015550    9032 node_conditions.go:105] duration metric: took 8.663ms to run NodePressure ...
	I0908 11:19:43.015550    9032 start.go:241] waiting for startup goroutines ...
	I0908 11:19:43.015710    9032 start.go:255] writing updated cluster config ...
	I0908 11:19:43.020324    9032 out.go:203] 
	I0908 11:19:43.037058    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:19:43.037058    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:19:43.044269    9032 out.go:179] * Starting "ha-331000-m03" control-plane node in "ha-331000" cluster
	I0908 11:19:43.046899    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:19:43.046957    9032 cache.go:58] Caching tarball of preloaded images
	I0908 11:19:43.047022    9032 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 11:19:43.047555    9032 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 11:19:43.047755    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:19:43.063313    9032 start.go:360] acquireMachinesLock for ha-331000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:19:43.063569    9032 start.go:364] duration metric: took 256.1µs to acquireMachinesLock for "ha-331000-m03"
	I0908 11:19:43.063768    9032 start.go:93] Provisioning new machine with config: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:19:43.064021    9032 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0908 11:19:43.069612    9032 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 11:19:43.069876    9032 start.go:159] libmachine.API.Create for "ha-331000" (driver="hyperv")
	I0908 11:19:43.069876    9032 client.go:168] LocalClient.Create starting
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 11:19:43.071468    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:19:43.071468    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:19:43.071468    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 11:19:44.989307    9032 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 11:19:44.989384    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:44.989384    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 11:19:46.741169    9032 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 11:19:46.741276    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:46.741276    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:19:48.293658    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:19:48.293658    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:48.294794    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:19:52.151256    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:19:52.151349    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:52.152786    9032 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 11:19:52.752995    9032 main.go:141] libmachine: Creating SSH key...
	I0908 11:19:52.858511    9032 main.go:141] libmachine: Creating VM...
	I0908 11:19:52.858511    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:19:55.925145    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:19:55.925145    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:55.925296    9032 main.go:141] libmachine: Using switch "Default Switch"
	I0908 11:19:55.925434    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:19:57.781755    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:19:57.782114    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:57.782114    9032 main.go:141] libmachine: Creating VHD
	I0908 11:19:57.782114    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 11:20:01.556936    9032 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8ED16FBC-0547-451D-A5C7-C13BFEC5F949
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 11:20:01.557722    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:01.557722    9032 main.go:141] libmachine: Writing magic tar header
	I0908 11:20:01.557722    9032 main.go:141] libmachine: Writing SSH key tar header
	I0908 11:20:01.571975    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 11:20:04.713258    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:04.713843    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:04.713843    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\disk.vhd' -SizeBytes 20000MB
	I0908 11:20:07.197832    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:07.197832    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:07.197962    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-331000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 11:20:10.788379    9032 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-331000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 11:20:10.788414    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:10.788414    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-331000-m03 -DynamicMemoryEnabled $false
	I0908 11:20:13.038761    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:13.038761    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:13.038978    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-331000-m03 -Count 2
	I0908 11:20:15.184004    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:15.184362    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:15.184436    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-331000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\boot2docker.iso'
	I0908 11:20:17.717638    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:17.717638    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:17.718635    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-331000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\disk.vhd'
	I0908 11:20:20.377272    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:20.377272    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:20.378152    9032 main.go:141] libmachine: Starting VM...
	I0908 11:20:20.378203    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-331000-m03
	I0908 11:20:23.508038    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:23.508038    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:23.508038    9032 main.go:141] libmachine: Waiting for host to start...
	I0908 11:20:23.508038    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:25.859623    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:25.859623    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:25.859623    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:28.437268    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:28.437268    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:29.438421    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:31.691135    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:31.691265    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:31.691446    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:34.354445    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:34.354445    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:35.355702    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:37.603186    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:37.603186    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:37.603186    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:40.159827    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:40.159827    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:41.160799    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:43.401461    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:43.401461    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:43.401581    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:45.979936    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:45.979936    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:46.980313    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:49.261692    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:49.262636    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:49.262705    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:52.121155    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:20:52.121155    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:52.122201    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:54.385031    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:54.385031    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:54.385031    9032 machine.go:93] provisionDockerMachine start ...
	I0908 11:20:54.385031    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:56.574458    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:56.574458    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:56.574748    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:59.120434    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:20:59.120757    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:59.127856    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:20:59.128882    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:20:59.128882    9032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:20:59.274979    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:20:59.275072    9032 buildroot.go:166] provisioning hostname "ha-331000-m03"
	I0908 11:20:59.275140    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:01.402631    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:01.402631    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:01.403018    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:03.994412    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:03.994722    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:04.000450    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:04.000995    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:04.001096    9032 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-331000-m03 && echo "ha-331000-m03" | sudo tee /etc/hostname
	I0908 11:21:04.171198    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-331000-m03
	
	I0908 11:21:04.171198    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:06.299116    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:06.299523    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:06.299523    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:08.838786    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:08.839879    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:08.846030    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:08.846618    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:08.846651    9032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-331000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-331000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-331000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:21:09.008823    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:21:09.008823    9032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 11:21:09.008940    9032 buildroot.go:174] setting up certificates
	I0908 11:21:09.008940    9032 provision.go:84] configureAuth start
	I0908 11:21:09.009035    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:11.095960    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:11.095960    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:11.096855    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:13.685526    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:13.686557    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:13.686767    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:15.772996    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:15.773469    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:15.773551    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:18.328117    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:18.328117    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:18.328117    9032 provision.go:143] copyHostCerts
	I0908 11:21:18.328117    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 11:21:18.328117    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 11:21:18.328117    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 11:21:18.329058    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 11:21:18.330007    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 11:21:18.330007    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 11:21:18.330007    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 11:21:18.330804    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 11:21:18.332086    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 11:21:18.332361    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 11:21:18.332361    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 11:21:18.332878    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 11:21:18.333534    9032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-331000-m03 san=[127.0.0.1 172.20.56.88 ha-331000-m03 localhost minikube]
	I0908 11:21:18.650549    9032 provision.go:177] copyRemoteCerts
	I0908 11:21:18.659544    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:21:18.659544    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:20.807232    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:20.807232    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:20.807979    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:23.319737    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:23.319737    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:23.320250    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:21:23.432422    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7728185s)
	I0908 11:21:23.432422    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 11:21:23.432422    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:21:23.486505    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 11:21:23.486505    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 11:21:23.539516    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 11:21:23.539516    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:21:23.596048    9032 provision.go:87] duration metric: took 14.5869253s to configureAuth
	I0908 11:21:23.596048    9032 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:21:23.596624    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:21:23.596989    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:25.720420    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:25.720420    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:25.721090    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:28.328722    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:28.328722    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:28.336931    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:28.337092    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:28.337092    9032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 11:21:28.487261    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 11:21:28.487261    9032 buildroot.go:70] root file system type: tmpfs
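The root-filesystem probe above (`df --output=fstype / | tail -n 1`) is how the provisioner decides the guest uses tmpfs. A minimal sketch of the same probe (assumes GNU coreutils `df`, which supports `--output`):

```shell
# Print only the filesystem-type column for /; `df --output` emits a
# header row plus one data row, and `tail -n 1` keeps the data row,
# exactly as the SSH command in the log above does.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fstype: ${fstype}"
```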
	I0908 11:21:28.487490    9032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 11:21:28.487599    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:30.593354    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:30.593354    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:30.593354    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:33.141874    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:33.141874    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:33.147778    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:33.147778    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:33.148355    9032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.20.59.73"
	Environment="NO_PROXY=172.20.59.73,172.20.54.101"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 11:21:33.326293    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.20.59.73
	Environment=NO_PROXY=172.20.59.73,172.20.54.101
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 11:21:33.326293    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:35.481077    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:35.481880    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:35.481880    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:38.057205    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:38.057926    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:38.063241    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:38.063918    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:38.063918    9032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 11:21:39.452136    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 11:21:39.452136    9032 machine.go:96] duration metric: took 45.0665419s to provisionDockerMachine
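The `diff -u old new || { mv ...; restart; }` command above is an install-if-changed idiom: the unit file is only moved into place (and docker restarted) when it differs from what is already there, and on a fresh node `diff` fails outright because the target does not exist yet, which is the `can't stat` output recorded above. A sketch of the same idiom on throwaway temp files (paths are illustrative, and the daemon-reload/enable/restart side of the branch is omitted):

```shell
set -eu
dir=$(mktemp -d)
printf '[Unit]\nDescription=example\n' > "$dir/docker.service.new"
# First run: the target unit does not exist, so `diff` exits non-zero
# and the new file is moved into place. In the log, this same branch
# also runs daemon-reload, enable, and restart.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || \
  mv "$dir/docker.service.new" "$dir/docker.service"
# A second run with identical content would make `diff` succeed,
# leaving the installed unit untouched.
```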
	I0908 11:21:39.452136    9032 client.go:171] duration metric: took 1m56.380817s to LocalClient.Create
	I0908 11:21:39.452136    9032 start.go:167] duration metric: took 1m56.380817s to libmachine.API.Create "ha-331000"
	I0908 11:21:39.452136    9032 start.go:293] postStartSetup for "ha-331000-m03" (driver="hyperv")
	I0908 11:21:39.452136    9032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:21:39.465222    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:21:39.465222    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:41.583555    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:41.583555    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:41.583639    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:44.143282    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:44.143488    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:44.143964    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:21:44.264444    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7991625s)
	I0908 11:21:44.275613    9032 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:21:44.283795    9032 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:21:44.283880    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 11:21:44.284128    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 11:21:44.285463    9032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 11:21:44.285463    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 11:21:44.295831    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:21:44.318408    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 11:21:44.376352    9032 start.go:296] duration metric: took 4.9241542s for postStartSetup
	I0908 11:21:44.379710    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:46.472651    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:46.473445    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:46.473544    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:48.999344    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:48.999836    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:49.000105    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:21:49.002660    9032 start.go:128] duration metric: took 2m5.9369462s to createHost
	I0908 11:21:49.002777    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:51.080900    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:51.080900    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:51.080900    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:53.618126    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:53.618901    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:53.625116    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:53.625651    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:53.625744    9032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:21:53.769059    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757330513.767382949
	
	I0908 11:21:53.769137    9032 fix.go:216] guest clock: 1757330513.767382949
	I0908 11:21:53.769137    9032 fix.go:229] Guest: 2025-09-08 11:21:53.767382949 +0000 UTC Remote: 2025-09-08 11:21:49.0026609 +0000 UTC m=+558.731019101 (delta=4.764722049s)
	I0908 11:21:53.769137    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:55.901636    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:55.901636    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:55.902130    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:58.463792    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:58.463792    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:58.471355    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:58.472040    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:58.472124    9032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757330513
	I0908 11:21:58.634236    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 11:21:53 UTC 2025
	
	I0908 11:21:58.634236    9032 fix.go:236] clock set: Mon Sep  8 11:21:53 UTC 2025
	 (err=<nil>)
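The clock fix above reads the guest's epoch time (`date +%s.%N`), compares it to the host-side reference, and pushes the host time into the guest with `sudo date -s @<epoch>`. A sketch of the delta computation (the guest value is taken from the log; the host value here is hypothetical — the log's actual delta was about 4.76s):

```shell
guest=1757330513   # epoch seconds the guest reported via `date +%s.%N`
host=1757330508    # illustrative host-side reference timestamp
# Positive delta means the guest clock runs ahead of the reference.
delta=$((guest - host))
echo "delta=${delta}s"
```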
	I0908 11:21:58.634236    9032 start.go:83] releasing machines lock for "ha-331000-m03", held for 2m15.5689839s
	I0908 11:21:58.634236    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:22:00.790543    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:00.791315    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:00.791389    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:03.292724    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:22:03.293782    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:03.297035    9032 out.go:179] * Found network options:
	I0908 11:22:03.302590    9032 out.go:179]   - NO_PROXY=172.20.59.73,172.20.54.101
	W0908 11:22:03.307595    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.307595    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:22:03.310591    9032 out.go:179]   - NO_PROXY=172.20.59.73,172.20.54.101
	W0908 11:22:03.316602    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.316602    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.319010    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.319111    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:22:03.321338    9032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 11:22:03.321338    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:22:03.334013    9032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:22:03.334013    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:22:05.561625    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:05.562601    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:05.562485    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:05.562601    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:05.562686    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:05.562686    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:08.207621    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:22:08.207910    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:08.208123    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:22:08.249218    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:22:08.249218    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:08.249663    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:22:08.314281    9032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9928805s)
	W0908 11:22:08.314364    9032 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
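The exit status 127 above is the shell's "command not found" code: the Windows binary name `curl.exe` is being passed through `ssh_runner` into the Linux guest, where no such name exists on `PATH`, so the registry reachability check can never succeed and the `Failing to connect to https://registry.k8s.io/` warning follows. A sketch of why the guest shell returns 127 (assumes a POSIX shell with no `curl.exe` on `PATH`):

```shell
# Running a nonexistent command name through sh reproduces the
# status-127 failure recorded in the log above.
sh -c 'curl.exe --version' >/dev/null 2>&1
rc=$?
echo "curl.exe rc=${rc}"
```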
	I0908 11:22:08.350255    9032 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0161799s)
	W0908 11:22:08.350255    9032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:22:08.363382    9032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:22:08.397632    9032 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:22:08.397632    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:22:08.398056    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:22:08.449391    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0908 11:22:08.459289    9032 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 11:22:08.459356    9032 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 11:22:08.487869    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 11:22:08.513248    9032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 11:22:08.523988    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 11:22:08.560890    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:22:08.596510    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 11:22:08.634157    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:22:08.667723    9032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:22:08.700890    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 11:22:08.735841    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 11:22:08.769037    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 11:22:08.801439    9032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:22:08.820155    9032 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:22:08.834386    9032 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:22:08.872615    9032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:22:08.903269    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:09.144835    9032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 11:22:09.204982    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:22:09.215465    9032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 11:22:09.250905    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:22:09.288405    9032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:22:09.329084    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:22:09.366596    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:22:09.401357    9032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 11:22:09.467495    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:22:09.492227    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:22:09.546331    9032 ssh_runner.go:195] Run: which cri-dockerd
	I0908 11:22:09.565814    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 11:22:09.587404    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 11:22:09.635590    9032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 11:22:09.884840    9032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 11:22:10.109830    9032 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 11:22:10.109937    9032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 11:22:10.164114    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:22:10.200775    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:10.455783    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:22:11.192386    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:22:11.229790    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 11:22:11.269738    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:22:11.306911    9032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 11:22:11.548276    9032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 11:22:11.799006    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:12.031232    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 11:22:12.107021    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 11:22:12.147642    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:12.401058    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 11:22:12.570410    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:22:12.602409    9032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 11:22:12.620876    9032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 11:22:12.632292    9032 start.go:563] Will wait 60s for crictl version
	I0908 11:22:12.643747    9032 ssh_runner.go:195] Run: which crictl
	I0908 11:22:12.662962    9032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:22:12.721009    9032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 11:22:12.731772    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:22:12.778545    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:22:12.815623    9032 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 11:22:12.820138    9032 out.go:179]   - env NO_PROXY=172.20.59.73
	I0908 11:22:12.823329    9032 out.go:179]   - env NO_PROXY=172.20.59.73,172.20.54.101
	I0908 11:22:12.825692    9032 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 11:22:12.833099    9032 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 11:22:12.833099    9032 ip.go:218] interface addr: 172.20.48.1/20
	I0908 11:22:12.844764    9032 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 11:22:12.852639    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
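The `/etc/hosts` rewrite above follows a strip-then-append pattern: remove any existing `host.minikube.internal` mapping, append the fresh one, and copy the result back over the original. A sketch of the same pattern applied to a temp copy instead of the real `/etc/hosts` (file contents are illustrative):

```shell
# Build a tab literal portably (avoids bash-only $'\t').
tab=$(printf '\t')
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop any line already mapping host.minikube.internal, append the new
# mapping, then copy the rewritten file back over the original.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '172.20.48.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```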
	I0908 11:22:12.877193    9032 mustload.go:65] Loading cluster: ha-331000
	I0908 11:22:12.879633    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:22:12.880478    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:22:14.965550    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:14.966367    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:14.966367    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:22:14.967400    9032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000 for IP: 172.20.56.88
	I0908 11:22:14.967460    9032 certs.go:194] generating shared ca certs ...
	I0908 11:22:14.967460    9032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:22:14.968372    9032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 11:22:14.968849    9032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 11:22:14.969066    9032 certs.go:256] generating profile certs ...
	I0908 11:22:14.969752    9032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key
	I0908 11:22:14.969967    9032 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe
	I0908 11:22:14.970085    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.73 172.20.54.101 172.20.56.88 172.20.63.254]
	I0908 11:22:15.122275    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe ...
	I0908 11:22:15.122275    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe: {Name:mk057b623324d456dec2f27ef6117b08481c86d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:22:15.124333    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe ...
	I0908 11:22:15.124333    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe: {Name:mk1ebc4009e0b98e764cd6b67eb2845cce8f259f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:22:15.125270    9032 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt
	I0908 11:22:15.142205    9032 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key
	I0908 11:22:15.144098    9032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key
	I0908 11:22:15.144098    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 11:22:15.144098    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 11:22:15.144098    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 11:22:15.144947    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 11:22:15.145127    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 11:22:15.145554    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 11:22:15.145732    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 11:22:15.145978    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 11:22:15.146094    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 11:22:15.146628    9032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 11:22:15.146877    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 11:22:15.147196    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 11:22:15.147922    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 11:22:15.148336    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 11:22:15.149091    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 11:22:15.149122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 11:22:15.149122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:15.149656    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 11:22:15.149929    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:22:17.288154    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:17.289104    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:17.289286    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:19.842364    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:22:19.842364    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:19.843763    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:22:19.951453    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0908 11:22:19.959621    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0908 11:22:19.993915    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0908 11:22:20.000916    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0908 11:22:20.036021    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0908 11:22:20.044895    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0908 11:22:20.081356    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0908 11:22:20.089317    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0908 11:22:20.125644    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0908 11:22:20.132652    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0908 11:22:20.166854    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0908 11:22:20.174348    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0908 11:22:20.199308    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:22:20.255911    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:22:20.310231    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:22:20.364512    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:22:20.417575    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0908 11:22:20.472239    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:22:20.528239    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:22:20.587739    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:22:20.645424    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 11:22:20.706396    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:22:20.761883    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 11:22:20.815121    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0908 11:22:20.852947    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0908 11:22:20.887898    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0908 11:22:20.926289    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0908 11:22:20.968061    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0908 11:22:21.018821    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0908 11:22:21.062823    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0908 11:22:21.122240    9032 ssh_runner.go:195] Run: openssl version
	I0908 11:22:21.144790    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 11:22:21.182991    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 11:22:21.192279    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 11:22:21.203708    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 11:22:21.225998    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:22:21.263877    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:22:21.298309    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:21.305840    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:21.316656    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:21.340985    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:22:21.374188    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 11:22:21.410230    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 11:22:21.418283    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 11:22:21.429136    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 11:22:21.450522    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
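The `test -L … || ln -fs …` commands above install each CA file under its OpenSSL subject-hash name (e.g. `b5213941.0`) so the TLS stack can locate it, and skip the work when the symlink already exists. A minimal sketch of that idempotent-symlink idiom in Python — the hash filename is taken from the log, not computed here:

```python
import os
import tempfile

def ensure_symlink(target: str, link: str) -> None:
    """Create link -> target unless a symlink is already in place (mirrors `test -L || ln -fs`)."""
    if os.path.islink(link):
        return
    tmp = link + ".tmp"
    os.symlink(target, tmp)   # build the new link under a temp name
    os.replace(tmp, link)     # atomic swap, like `ln -fs`

certs_dir = tempfile.mkdtemp()
pem = os.path.join(certs_dir, "minikubeCA.pem")
with open(pem, "w") as f:
    f.write("dummy cert\n")

# "b5213941" is the subject hash seen in the log; here it is just a filename.
link = os.path.join(certs_dir, "b5213941.0")
ensure_symlink(pem, link)
ensure_symlink(pem, link)  # second call is a no-op

print(os.path.islink(link))
```

The real hash name comes from `openssl x509 -hash -noout`, which the log runs just before creating each link.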
	I0908 11:22:21.483940    9032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:22:21.493960    9032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 11:22:21.494945    9032 kubeadm.go:926] updating node {m03 172.20.56.88 8443 v1.34.0 docker true true} ...
	I0908 11:22:21.494945    9032 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-331000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.56.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:22:21.494945    9032 kube-vip.go:115] generating kube-vip config ...
	I0908 11:22:21.505888    9032 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0908 11:22:21.540478    9032 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0908 11:22:21.540478    9032 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0908 11:22:21.551774    9032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:22:21.573508    9032 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0908 11:22:21.586850    9032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0908 11:22:21.610122    9032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256
	I0908 11:22:21.610122    9032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256
	I0908 11:22:21.610122    9032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
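The `checksum=file:…sha256` query on each download URL above tells the downloader to verify the payload against the published SHA-256 sidecar file. A hedged sketch of that verification step, using a local stand-in file rather than a real download:

```python
import hashlib
import os
import tempfile

def verify_sha256(path: str, expected_hex: str) -> bool:
    """Stream the file and compare its SHA-256 digest to the published checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Stand-in for a downloaded kubeadm binary and its .sha256 sidecar.
fd, path = tempfile.mkstemp()
os.write(fd, b"fake-kubeadm-bytes")
os.close(fd)
expected = hashlib.sha256(b"fake-kubeadm-bytes").hexdigest()

print(verify_sha256(path, expected))
```

A corrupted or truncated download would fail the comparison and force a re-fetch rather than installing a bad binary.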
	I0908 11:22:21.610122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:22:21.610122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:22:21.625329    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:22:21.625329    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:22:21.625842    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:22:21.639267    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0908 11:22:21.639439    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0908 11:22:21.682110    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:22:21.682110    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0908 11:22:21.682110    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0908 11:22:21.693165    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:22:21.725913    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0908 11:22:21.725913    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
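Each binary above is first probed with `stat -c "%s %y"`; a nonzero exit means "missing", and only then is the multi-megabyte file copied into the VM. A sketch of that check-before-transfer logic, with a local copy standing in for the ssh_runner scp:

```python
import os
import shutil
import tempfile

def sync_if_missing(src: str, dst: str) -> bool:
    """Copy src to dst only when dst is absent; returns True if a transfer happened."""
    try:
        os.stat(dst)              # the log's `stat -c "%s %y"` probe
        return False              # already present: skip the upload
    except FileNotFoundError:
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)    # stands in for ssh_runner's scp
        return True

root = tempfile.mkdtemp()
src = os.path.join(root, "cache", "kubectl")
os.makedirs(os.path.dirname(src))
with open(src, "wb") as f:
    f.write(b"binary")
dst = os.path.join(root, "binaries", "v1.34.0", "kubectl")

first = sync_if_missing(src, dst)   # transfers
second = sync_if_missing(src, dst)  # no-op
print(first, second)
```

This is why the three "existence check … Process exited with status 1" entries are expected on a fresh node: they are the signal to upload, not errors.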
	I0908 11:22:22.929193    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0908 11:22:22.952184    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0908 11:22:22.991701    9032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:22:23.035613    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0908 11:22:23.106287    9032 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0908 11:22:23.113936    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
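The bash one-liner above makes the `/etc/hosts` update idempotent: filter out any stale `control-plane.minikube.internal` line with `grep -v`, append the fresh VIP mapping, and copy the temp file back over `/etc/hosts`. The same filter-then-append idiom in Python, run against a temp file rather than the real hosts file:

```python
import os
import tempfile

def set_hosts_entry(hosts_path: str, ip: str, name: str) -> None:
    """Drop any stale line for `name`, then append `ip\\tname` (mirrors the grep -v pipeline)."""
    with open(hosts_path) as f:
        kept = [line for line in f
                if not line.rstrip("\n").endswith("\t" + name)]
    kept.append(f"{ip}\t{name}\n")
    tmp = hosts_path + ".tmp"
    with open(tmp, "w") as f:
        f.writelines(kept)
    os.replace(tmp, hosts_path)   # like `sudo cp /tmp/h.$$ /etc/hosts`

fd, hosts_path = tempfile.mkstemp()
os.close(fd)
with open(hosts_path, "w") as f:
    f.write("127.0.0.1\tlocalhost\n172.20.1.1\tcontrol-plane.minikube.internal\n")

set_hosts_entry(hosts_path, "172.20.63.254", "control-plane.minikube.internal")
print(open(hosts_path).read())
```

Rewriting the whole file through a temp copy means repeated joins never accumulate duplicate entries for the VIP.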
	I0908 11:22:23.154346    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:23.415159    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:22:23.464173    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:22:23.464173    9032 start.go:317] joinCluster: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.20.56.88 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:22:23.464173    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0908 11:22:23.464173    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:22:25.638291    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:25.639234    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:25.639311    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:28.260838    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:22:28.260838    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:28.261558    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:22:28.595706    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1313418s)
	I0908 11:22:28.595797    9032 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.20.56.88 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:22:28.595797    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token irms23.r745u42ppm7pmtog --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m03 --control-plane --apiserver-advertise-address=172.20.56.88 --apiserver-bind-port=8443"
	I0908 11:23:31.186137    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token irms23.r745u42ppm7pmtog --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m03 --control-plane --apiserver-advertise-address=172.20.56.88 --apiserver-bind-port=8443": (1m2.5895509s)
	I0908 11:23:31.186137    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0908 11:23:31.978856    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-331000-m03 minikube.k8s.io/updated_at=2025_09_08T11_23_31_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=ha-331000 minikube.k8s.io/primary=false
	I0908 11:23:32.187302    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-331000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0908 11:23:32.377511    9032 start.go:319] duration metric: took 1m8.9124703s to joinCluster
	I0908 11:23:32.377511    9032 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.20.56.88 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:23:32.378520    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:23:32.386523    9032 out.go:179] * Verifying Kubernetes components...
	I0908 11:23:32.400511    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:23:32.847437    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:23:32.903394    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0908 11:23:32.903595    9032 kubeadm.go:483] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.59.73:8443
	I0908 11:23:32.904581    9032 node_ready.go:35] waiting up to 6m0s for node "ha-331000-m03" to be "Ready" ...
	W0908 11:23:34.946251    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:37.410476    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:39.412034    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:41.415064    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:43.910231    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:45.911130    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:48.413774    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:50.911027    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	I0908 11:23:52.912039    9032 node_ready.go:49] node "ha-331000-m03" is "Ready"
	I0908 11:23:52.912039    9032 node_ready.go:38] duration metric: took 20.0071221s for node "ha-331000-m03" to be "Ready" ...
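The `node_ready.go` lines above show a bounded poll: the node object is re-checked every couple of seconds, logging "will retry" while `Ready` is `False`, until the status flips or the 6m budget runs out (20s here). A generic sketch of that wait loop with a fake status source standing in for the Kubernetes API:

```python
import itertools
import time

def wait_for(check, timeout_s: float, interval_s: float = 0.01) -> bool:
    """Poll `check` until it returns True or `timeout_s` elapses (cf. the 6m node wait)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Fake node status: reports "False" eight times (like the retries above), then "Ready".
statuses = itertools.chain(["False"] * 8, itertools.repeat("Ready"))
ready = wait_for(lambda: next(statuses) == "Ready", timeout_s=5.0)
print(ready)
```

Using a deadline on a monotonic clock rather than counting attempts keeps the total wait fixed even when each probe takes a variable amount of time.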
	I0908 11:23:52.912039    9032 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:23:52.924032    9032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:23:52.967087    9032 api_server.go:72] duration metric: took 20.589316s to wait for apiserver process to appear ...
	I0908 11:23:52.967186    9032 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:23:52.967186    9032 api_server.go:253] Checking apiserver healthz at https://172.20.59.73:8443/healthz ...
	I0908 11:23:52.975045    9032 api_server.go:279] https://172.20.59.73:8443/healthz returned 200:
	ok
	I0908 11:23:52.977049    9032 api_server.go:141] control plane version: v1.34.0
	I0908 11:23:52.977049    9032 api_server.go:131] duration metric: took 9.8628ms to wait for apiserver health ...
	I0908 11:23:52.977049    9032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:23:52.988634    9032 system_pods.go:59] 24 kube-system pods found
	I0908 11:23:52.988634    9032 system_pods.go:61] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "etcd-ha-331000-m03" [f3e07fd8-babb-48c8-b2ee-98ac1f0774a6] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kindnet-62t6b" [20cef753-27c5-4104-b55a-e06cd9dfdd13] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-apiserver-ha-331000-m03" [54e7e79c-00c9-4495-9ce6-7cff1c216b77] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-controller-manager-ha-331000-m03" [88083c0b-2e89-4b83-80f5-496186f1c17d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-proxy-kt6wd" [b04aa754-6d79-4baa-81e8-215962b8505d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-scheduler-ha-331000-m03" [790e3732-e3a1-4450-bf45-9cd8bc369180] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-vip-ha-331000-m03" [0748fba6-8fdc-46c7-ac09-f0b39aff443d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:23:52.988634    9032 system_pods.go:74] duration metric: took 11.5848ms to wait for pod list to return data ...
	I0908 11:23:52.988634    9032 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:23:52.995358    9032 default_sa.go:45] found service account: "default"
	I0908 11:23:52.995618    9032 default_sa.go:55] duration metric: took 6.9841ms for default service account to be created ...
	I0908 11:23:52.995618    9032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:23:53.006498    9032 system_pods.go:86] 24 kube-system pods found
	I0908 11:23:53.006578    9032 system_pods.go:89] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "etcd-ha-331000-m03" [f3e07fd8-babb-48c8-b2ee-98ac1f0774a6] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kindnet-62t6b" [20cef753-27c5-4104-b55a-e06cd9dfdd13] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-apiserver-ha-331000-m03" [54e7e79c-00c9-4495-9ce6-7cff1c216b77] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-controller-manager-ha-331000-m03" [88083c0b-2e89-4b83-80f5-496186f1c17d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-proxy-kt6wd" [b04aa754-6d79-4baa-81e8-215962b8505d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-scheduler-ha-331000-m03" [790e3732-e3a1-4450-bf45-9cd8bc369180] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-vip-ha-331000-m03" [0748fba6-8fdc-46c7-ac09-f0b39aff443d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:23:53.006578    9032 system_pods.go:126] duration metric: took 10.9601ms to wait for k8s-apps to be running ...
	I0908 11:23:53.006578    9032 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:23:53.017582    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:23:53.049859    9032 system_svc.go:56] duration metric: took 43.2805ms WaitForService to wait for kubelet
	I0908 11:23:53.049964    9032 kubeadm.go:578] duration metric: took 20.6720873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:23:53.049964    9032 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:23:53.056708    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:23:53.056708    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:23:53.056708    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:23:53.056708    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:23:53.056708    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:23:53.056708    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:23:53.056708    9032 node_conditions.go:105] duration metric: took 6.6757ms to run NodePressure ...
	I0908 11:23:53.056708    9032 start.go:241] waiting for startup goroutines ...
	I0908 11:23:53.057239    9032 start.go:255] writing updated cluster config ...
	I0908 11:23:53.068806    9032 ssh_runner.go:195] Run: rm -f paused
	I0908 11:23:53.076828    9032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:23:53.078364    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 11:23:53.095503    9032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-66pcq" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.106839    9032 pod_ready.go:94] pod "coredns-66bc5c9577-66pcq" is "Ready"
	I0908 11:23:53.106922    9032 pod_ready.go:86] duration metric: took 11.419ms for pod "coredns-66bc5c9577-66pcq" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.106922    9032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x595c" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.117946    9032 pod_ready.go:94] pod "coredns-66bc5c9577-x595c" is "Ready"
	I0908 11:23:53.118033    9032 pod_ready.go:86] duration metric: took 11.1107ms for pod "coredns-66bc5c9577-x595c" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.124108    9032 pod_ready.go:83] waiting for pod "etcd-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.133755    9032 pod_ready.go:94] pod "etcd-ha-331000" is "Ready"
	I0908 11:23:53.133930    9032 pod_ready.go:86] duration metric: took 9.7356ms for pod "etcd-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.133930    9032 pod_ready.go:83] waiting for pod "etcd-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.144082    9032 pod_ready.go:94] pod "etcd-ha-331000-m02" is "Ready"
	I0908 11:23:53.144082    9032 pod_ready.go:86] duration metric: took 10.1523ms for pod "etcd-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.144249    9032 pod_ready.go:83] waiting for pod "etcd-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.280143    9032 request.go:683] "Waited before sending request" delay="135.892ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-331000-m03"
	I0908 11:23:53.479621    9032 request.go:683] "Waited before sending request" delay="191.6738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:53.485187    9032 pod_ready.go:94] pod "etcd-ha-331000-m03" is "Ready"
	I0908 11:23:53.485187    9032 pod_ready.go:86] duration metric: took 340.9334ms for pod "etcd-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.680162    9032 request.go:683] "Waited before sending request" delay="194.9726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0908 11:23:53.687445    9032 pod_ready.go:83] waiting for pod "kube-apiserver-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.880214    9032 request.go:683] "Waited before sending request" delay="192.4585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-331000"
	I0908 11:23:54.080391    9032 request.go:683] "Waited before sending request" delay="194.0726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:54.086720    9032 pod_ready.go:94] pod "kube-apiserver-ha-331000" is "Ready"
	I0908 11:23:54.086792    9032 pod_ready.go:86] duration metric: took 399.1595ms for pod "kube-apiserver-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.086792    9032 pod_ready.go:83] waiting for pod "kube-apiserver-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.280260    9032 request.go:683] "Waited before sending request" delay="193.3033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-331000-m02"
	I0908 11:23:54.480309    9032 request.go:683] "Waited before sending request" delay="192.7842ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:54.486031    9032 pod_ready.go:94] pod "kube-apiserver-ha-331000-m02" is "Ready"
	I0908 11:23:54.486151    9032 pod_ready.go:86] duration metric: took 399.3543ms for pod "kube-apiserver-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.486151    9032 pod_ready.go:83] waiting for pod "kube-apiserver-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.680168    9032 request.go:683] "Waited before sending request" delay="194.0148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-331000-m03"
	I0908 11:23:54.880383    9032 request.go:683] "Waited before sending request" delay="194.3689ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:54.886671    9032 pod_ready.go:94] pod "kube-apiserver-ha-331000-m03" is "Ready"
	I0908 11:23:54.886706    9032 pod_ready.go:86] duration metric: took 400.5495ms for pod "kube-apiserver-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.080306    9032 request.go:683] "Waited before sending request" delay="193.4188ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0908 11:23:55.089618    9032 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.280343    9032 request.go:683] "Waited before sending request" delay="190.5616ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-331000"
	I0908 11:23:55.479866    9032 request.go:683] "Waited before sending request" delay="193.5005ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:55.488384    9032 pod_ready.go:94] pod "kube-controller-manager-ha-331000" is "Ready"
	I0908 11:23:55.488587    9032 pod_ready.go:86] duration metric: took 398.8704ms for pod "kube-controller-manager-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.488587    9032 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.679634    9032 request.go:683] "Waited before sending request" delay="190.8003ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-331000-m02"
	I0908 11:23:55.880557    9032 request.go:683] "Waited before sending request" delay="190.404ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:55.894392    9032 pod_ready.go:94] pod "kube-controller-manager-ha-331000-m02" is "Ready"
	I0908 11:23:55.894468    9032 pod_ready.go:86] duration metric: took 405.8753ms for pod "kube-controller-manager-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.894468    9032 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.079948    9032 request.go:683] "Waited before sending request" delay="185.3781ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-331000-m03"
	I0908 11:23:56.280181    9032 request.go:683] "Waited before sending request" delay="192.5257ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:56.285858    9032 pod_ready.go:94] pod "kube-controller-manager-ha-331000-m03" is "Ready"
	I0908 11:23:56.285858    9032 pod_ready.go:86] duration metric: took 391.3853ms for pod "kube-controller-manager-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.481138    9032 request.go:683] "Waited before sending request" delay="195.2773ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0908 11:23:56.487072    9032 pod_ready.go:83] waiting for pod "kube-proxy-kt6wd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.679529    9032 request.go:683] "Waited before sending request" delay="191.9104ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt6wd"
	I0908 11:23:56.881130    9032 request.go:683] "Waited before sending request" delay="194.9441ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:56.887188    9032 pod_ready.go:94] pod "kube-proxy-kt6wd" is "Ready"
	I0908 11:23:56.887188    9032 pod_ready.go:86] duration metric: took 399.5675ms for pod "kube-proxy-kt6wd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.887286    9032 pod_ready.go:83] waiting for pod "kube-proxy-mwwp8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.080083    9032 request.go:683] "Waited before sending request" delay="192.7303ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mwwp8"
	I0908 11:23:57.279841    9032 request.go:683] "Waited before sending request" delay="193.1221ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:57.286182    9032 pod_ready.go:94] pod "kube-proxy-mwwp8" is "Ready"
	I0908 11:23:57.286182    9032 pod_ready.go:86] duration metric: took 398.8901ms for pod "kube-proxy-mwwp8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.286270    9032 pod_ready.go:83] waiting for pod "kube-proxy-smrc9" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.480183    9032 request.go:683] "Waited before sending request" delay="193.8254ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-smrc9"
	I0908 11:23:57.680428    9032 request.go:683] "Waited before sending request" delay="192.727ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:57.696508    9032 pod_ready.go:94] pod "kube-proxy-smrc9" is "Ready"
	I0908 11:23:57.696508    9032 pod_ready.go:86] duration metric: took 410.2334ms for pod "kube-proxy-smrc9" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.879753    9032 request.go:683] "Waited before sending request" delay="183.1601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0908 11:23:57.889927    9032 pod_ready.go:83] waiting for pod "kube-scheduler-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.081820    9032 request.go:683] "Waited before sending request" delay="191.891ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-331000"
	I0908 11:23:58.279927    9032 request.go:683] "Waited before sending request" delay="191.6019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:58.289047    9032 pod_ready.go:94] pod "kube-scheduler-ha-331000" is "Ready"
	I0908 11:23:58.289047    9032 pod_ready.go:86] duration metric: took 399.1154ms for pod "kube-scheduler-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.289047    9032 pod_ready.go:83] waiting for pod "kube-scheduler-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.480439    9032 request.go:683] "Waited before sending request" delay="191.3891ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-331000-m02"
	I0908 11:23:58.679951    9032 request.go:683] "Waited before sending request" delay="192.7459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:58.686736    9032 pod_ready.go:94] pod "kube-scheduler-ha-331000-m02" is "Ready"
	I0908 11:23:58.686736    9032 pod_ready.go:86] duration metric: took 397.684ms for pod "kube-scheduler-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.686871    9032 pod_ready.go:83] waiting for pod "kube-scheduler-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.880280    9032 request.go:683] "Waited before sending request" delay="193.2557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-331000-m03"
	I0908 11:23:59.080081    9032 request.go:683] "Waited before sending request" delay="193.6928ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:59.085660    9032 pod_ready.go:94] pod "kube-scheduler-ha-331000-m03" is "Ready"
	I0908 11:23:59.085660    9032 pod_ready.go:86] duration metric: took 398.7843ms for pod "kube-scheduler-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:59.085746    9032 pod_ready.go:40] duration metric: took 6.0087413s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:23:59.219961    9032 start.go:617] kubectl: 1.34.0, cluster: 1.34.0 (minor skew: 0)
	I0908 11:23:59.225671    9032 out.go:179] * Done! kubectl is now configured to use "ha-331000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.624782749Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count ecebe134df39d4547c205555809223e28f161f54e370b5bd9afeecbf5e78deb3], retrying...."
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.724453649Z" level=info msg="Loading containers: done."
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.758373149Z" level=info msg="Docker daemon" commit=249d679 containerd-snapshotter=false storage-driver=overlay2 version=28.4.0
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.758508849Z" level=info msg="Initializing buildkit"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.789908049Z" level=info msg="Completed buildkit initialization"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802419249Z" level=info msg="Daemon has completed initialization"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802470149Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802517749Z" level=info msg="API listen on /run/docker.sock"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802549449Z" level=info msg="API listen on [::]:2376"
	Sep 08 11:15:11 ha-331000 systemd[1]: Started Docker Application Container Engine.
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ca92f744e82a5520833697e05120addbe2bf45d79b817ec6b2194c8d65c4925/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c666b143621f59134c6e2500d43f1c0d6c810fb14829f60d0ef3233d1fc3cb11/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac8b5cd3e243ad1b413235794365cee8a292862b7d569b0358c408249ed0e1d9/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e7f286153cad9f9408b8ac2a864859e5fdfe368535ac1ea8d9ea387e5d86e10c/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/39738258a04922bf02296b90594789cacf0e92ea0d9f8e6bca73e8bee7b02a6c/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:32 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:32Z" level=info msg="Stop pulling image ghcr.io/kube-vip/kube-vip:v1.0.0: Status: Downloaded newer image for ghcr.io/kube-vip/kube-vip:v1.0.0"
	Sep 08 11:15:35 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:35Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 08 11:15:36 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d9aaf6382844361d220a584f72aff747f7b31d3c0ea7448320b07331419c869/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:37 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7d644b2de2060828a617429cff42a24609158d29262086069e3c9a74893405e0/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:44 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:44Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 08 11:15:59 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d9f06ca26bb0d46350387ead567b86c32d03c9cdcfc193aa2b23eeed4c17a82d/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:59 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c821f225b0bb599592a36aac7bec4ea340c7f9d2b6b9f1795ec0bebb0f557f45/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:59 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e017b041362ad82b2f50619699fbc7817aa174dcfd11fdd7a477c41ac0cee38/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:24:38 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:24:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f5353fd2e31b2d7d5559e16026a8ea6c4407aca4807d3e4c9ee40d27783ac82e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 08 11:24:40 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:24:40Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	119e4da7957c7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   f5353fd2e31b2       busybox-7b57f96db7-9vn9f
	c347d407ba4cb       52546a367cc9e                                                                                         9 minutes ago        Running             coredns                   0                   c821f225b0bb5       coredns-66bc5c9577-x595c
	1af67a1836ec4       52546a367cc9e                                                                                         9 minutes ago        Running             coredns                   0                   d9f06ca26bb0d       coredns-66bc5c9577-66pcq
	28c6f040dbf0e       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   6e017b041362a       storage-provisioner
	d20041f7a2f04       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              9 minutes ago        Running             kindnet-cni               0                   7d644b2de2060       kindnet-s8k98
	97663746caa0b       df0860106674d                                                                                         10 minutes ago       Running             kube-proxy                0                   9d9aaf6382844       kube-proxy-smrc9
	7ce862c8c2cd1       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     10 minutes ago       Running             kube-vip                  0                   39738258a0492       kube-vip-ha-331000
	49f5a74368fb6       5f1f5298c888d                                                                                         10 minutes ago       Running             etcd                      0                   e7f286153cad9       etcd-ha-331000
	ea216735dd19d       46169d968e920                                                                                         10 minutes ago       Running             kube-scheduler            0                   ac8b5cd3e243a       kube-scheduler-ha-331000
	ba99e0fd1b296       a0af72f2ec6d6                                                                                         10 minutes ago       Running             kube-controller-manager   0                   c666b143621f5       kube-controller-manager-ha-331000
	7ac2656037f51       90550c43ad2bc                                                                                         10 minutes ago       Running             kube-apiserver            0                   5ca92f744e82a       kube-apiserver-ha-331000
	
	
	==> coredns [1af67a1836ec] <==
	[INFO] 10.244.2.2:33118 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000138102s
	[INFO] 10.244.2.2:36973 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 89 0.002022518s
	[INFO] 10.244.1.2:58575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247902s
	[INFO] 10.244.1.2:41526 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143001s
	[INFO] 10.244.0.4:53000 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015138836s
	[INFO] 10.244.0.4:32813 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159301s
	[INFO] 10.244.0.4:56346 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109001s
	[INFO] 10.244.2.2:45140 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013930125s
	[INFO] 10.244.2.2:60260 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106701s
	[INFO] 10.244.2.2:52878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105101s
	[INFO] 10.244.1.2:35720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149801s
	[INFO] 10.244.1.2:34477 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000296403s
	[INFO] 10.244.0.4:37842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000428103s
	[INFO] 10.244.0.4:33068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117401s
	[INFO] 10.244.2.2:50512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124801s
	[INFO] 10.244.2.2:54937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163701s
	[INFO] 10.244.2.2:47278 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164701s
	[INFO] 10.244.2.2:40642 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223302s
	[INFO] 10.244.1.2:35632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117701s
	[INFO] 10.244.1.2:49567 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000385703s
	[INFO] 10.244.0.4:36803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000160301s
	[INFO] 10.244.0.4:57511 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000181002s
	[INFO] 10.244.2.2:47712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001138s
	[INFO] 10.244.2.2:45821 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000195901s
	[INFO] 10.244.2.2:44190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000384703s
	
	
	==> coredns [c347d407ba4c] <==
	[INFO] 10.244.1.2:39878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150901s
	[INFO] 10.244.1.2:54330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.041521874s
	[INFO] 10.244.1.2:42556 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126801s
	[INFO] 10.244.1.2:53807 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025529329s
	[INFO] 10.244.1.2:32944 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122901s
	[INFO] 10.244.1.2:52641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000361903s
	[INFO] 10.244.0.4:43577 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270402s
	[INFO] 10.244.0.4:59291 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000191901s
	[INFO] 10.244.0.4:47363 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155401s
	[INFO] 10.244.0.4:50361 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000333803s
	[INFO] 10.244.0.4:59534 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131001s
	[INFO] 10.244.2.2:58252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230902s
	[INFO] 10.244.2.2:40932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226602s
	[INFO] 10.244.2.2:38854 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000168101s
	[INFO] 10.244.2.2:33655 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120201s
	[INFO] 10.244.2.2:48291 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059001s
	[INFO] 10.244.1.2:57084 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158102s
	[INFO] 10.244.1.2:46607 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202902s
	[INFO] 10.244.0.4:43722 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157001s
	[INFO] 10.244.0.4:53189 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000275002s
	[INFO] 10.244.1.2:42829 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000220902s
	[INFO] 10.244.1.2:57669 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0000934s
	[INFO] 10.244.0.4:37278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136301s
	[INFO] 10.244.0.4:54658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078901s
	[INFO] 10.244.2.2:56538 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000263502s
	
	
	==> describe nodes <==
	Name:               ha-331000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-331000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=ha-331000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_15_35_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:15:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-331000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:25:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:25:06 +0000   Mon, 08 Sep 2025 11:15:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:25:06 +0000   Mon, 08 Sep 2025 11:15:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:25:06 +0000   Mon, 08 Sep 2025 11:15:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:25:06 +0000   Mon, 08 Sep 2025 11:15:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.59.73
	  Hostname:    ha-331000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	System Info:
	  Machine ID:                 249b21018bea44f699851389d47a9e54
	  System UUID:                9b619134-0a9b-2d4b-8f6c-7910abeef38c
	  Boot ID:                    4d57f7f4-ac7c-4865-ab08-17acbb07b094
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9vn9f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-66bc5c9577-66pcq             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     10m
	  kube-system                 coredns-66bc5c9577-x595c             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     10m
	  kube-system                 etcd-ha-331000                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         10m
	  kube-system                 kindnet-s8k98                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      10m
	  kube-system                 kube-apiserver-ha-331000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-331000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-smrc9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-331000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-331000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-331000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-331000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-331000 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-331000 event: Registered Node ha-331000 in Controller
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node ha-331000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node ha-331000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node ha-331000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m44s              kubelet          Node ha-331000 status is now: NodeReady
	  Normal  RegisteredNode           6m27s              node-controller  Node ha-331000 event: Registered Node ha-331000 in Controller
	  Normal  RegisteredNode           2m25s              node-controller  Node ha-331000 event: Registered Node ha-331000 in Controller
	
	
	Name:               ha-331000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-331000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=ha-331000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T11_19_22_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:19:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-331000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:25:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:25:08 +0000   Mon, 08 Sep 2025 11:19:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:25:08 +0000   Mon, 08 Sep 2025 11:19:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:25:08 +0000   Mon, 08 Sep 2025 11:19:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:25:08 +0000   Mon, 08 Sep 2025 11:19:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.54.101
	  Hostname:    ha-331000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6dd1047800e48f4b378489e353289dc
	  System UUID:                1218f680-7a65-5643-b365-add7e1fde0c1
	  Boot ID:                    00a93a50-d154-4210-8842-9359f3a59f53
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2wjzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-331000-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         6m21s
	  kube-system                 kindnet-mrfp7                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      6m21s
	  kube-system                 kube-apiserver-ha-331000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-331000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-mwwp8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-331000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-331000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m19s  kube-proxy       
	  Normal  RegisteredNode  6m17s  node-controller  Node ha-331000-m02 event: Registered Node ha-331000-m02 in Controller
	  Normal  RegisteredNode  6m17s  node-controller  Node ha-331000-m02 event: Registered Node ha-331000-m02 in Controller
	  Normal  RegisteredNode  2m25s  node-controller  Node ha-331000-m02 event: Registered Node ha-331000-m02 in Controller
	
	
	Name:               ha-331000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-331000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=ha-331000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T11_23_31_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:23:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-331000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:25:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:25:12 +0000   Mon, 08 Sep 2025 11:23:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:25:12 +0000   Mon, 08 Sep 2025 11:23:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:25:12 +0000   Mon, 08 Sep 2025 11:23:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:25:12 +0000   Mon, 08 Sep 2025 11:23:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.56.88
	  Hostname:    ha-331000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 e96735ab0b1c46f99d90f845bc8e1497
	  System UUID:                0b3baf86-40f7-384e-88c6-82fd84416909
	  Boot ID:                    45488445-c5ea-4946-8d4e-4b98b74eca69
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-qhn4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-331000-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m11s
	  kube-system                 kindnet-62t6b                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      2m12s
	  kube-system                 kube-apiserver-ha-331000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-ha-331000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-kt6wd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-ha-331000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-vip-ha-331000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        2m9s   kube-proxy       
	  Normal  RegisteredNode  2m10s  node-controller  Node ha-331000-m03 event: Registered Node ha-331000-m03 in Controller
	  Normal  RegisteredNode  2m7s   node-controller  Node ha-331000-m03 event: Registered Node ha-331000-m03 in Controller
	  Normal  RegisteredNode  2m7s   node-controller  Node ha-331000-m03 event: Registered Node ha-331000-m03 in Controller
	
	
	==> dmesg <==
	[Sep 8 11:13] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.003125] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001493] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.148731] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.003157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.016807] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.560386] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 8 11:14] kauditd_printk_skb: 96 callbacks suppressed
	[Sep 8 11:15] kauditd_printk_skb: 237 callbacks suppressed
	[  +0.162155] kauditd_printk_skb: 193 callbacks suppressed
	[ +13.121422] kauditd_printk_skb: 174 callbacks suppressed
	[ +11.168531] kauditd_printk_skb: 144 callbacks suppressed
	[  +0.698639] kauditd_printk_skb: 17 callbacks suppressed
	[Sep 8 11:19] kauditd_printk_skb: 92 callbacks suppressed
	[Sep 8 11:23] hrtimer: interrupt took 3015334 ns
	
	
	==> etcd [49f5a74368fb] <==
	{"level":"warn","ts":"2025-09-08T11:23:38.248391Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4f39c5f386e4c391","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"7.739041ms"}
	{"level":"info","ts":"2025-09-08T11:23:38.249602Z","caller":"traceutil/trace.go:172","msg":"trace[1637294592] transaction","detail":"{read_only:false; response_revision:1585; number_of_response:1; }","duration":"173.960561ms","start":"2025-09-08T11:23:38.075628Z","end":"2025-09-08T11:23:38.249588Z","steps":["trace[1637294592] 'process raft request'  (duration: 172.618446ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:23:40.612933Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"b23cc5464918f732","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"201.216487ms"}
	{"level":"warn","ts":"2025-09-08T11:23:40.613042Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4f39c5f386e4c391","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"201.330188ms"}
	{"level":"info","ts":"2025-09-08T11:23:40.613625Z","caller":"traceutil/trace.go:172","msg":"trace[885159927] transaction","detail":"{read_only:false; response_revision:1590; number_of_response:1; }","duration":"343.845246ms","start":"2025-09-08T11:23:40.269765Z","end":"2025-09-08T11:23:40.613610Z","steps":["trace[885159927] 'process raft request'  (duration: 343.657244ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:23:40.613845Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:23:40.269729Z","time spent":"344.019247ms","remote":"127.0.0.1:50432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":419,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:1587 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2025-09-08T11:23:40.624740Z","caller":"traceutil/trace.go:172","msg":"trace[986537914] linearizableReadLoop","detail":"{readStateIndex:1829; appliedIndex:1830; }","duration":"349.868513ms","start":"2025-09-08T11:23:40.274859Z","end":"2025-09-08T11:23:40.624727Z","steps":["trace[986537914] 'read index received'  (duration: 349.862813ms)","trace[986537914] 'applied index is now lower than readState.Index'  (duration: 4.9µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:23:40.745726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"319.619674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-331000-m03\" limit:1 ","response":"range_response_count:1 size:4093"}
	{"level":"info","ts":"2025-09-08T11:23:40.745801Z","caller":"traceutil/trace.go:172","msg":"trace[486810752] range","detail":"{range_begin:/registry/minions/ha-331000-m03; range_end:; response_count:1; response_revision:1590; }","duration":"319.705175ms","start":"2025-09-08T11:23:40.426084Z","end":"2025-09-08T11:23:40.745789Z","steps":["trace[486810752] 'agreement among raft nodes before linearized reading'  (duration: 226.120028ms)","trace[486810752] 'range keys from in-memory index tree'  (duration: 93.281743ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:23:40.745848Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:23:40.426068Z","time spent":"319.757576ms","remote":"127.0.0.1:50258","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":4116,"request content":"key:\"/registry/minions/ha-331000-m03\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:23:40.746073Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"471.21087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2025-09-08T11:23:40.746095Z","caller":"traceutil/trace.go:172","msg":"trace[1266347259] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1590; }","duration":"471.23317ms","start":"2025-09-08T11:23:40.274855Z","end":"2025-09-08T11:23:40.746088Z","steps":["trace[1266347259] 'agreement among raft nodes before linearized reading'  (duration: 350.237417ms)","trace[1266347259] 'range keys from in-memory index tree'  (duration: 120.932153ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:23:40.746115Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:23:40.274846Z","time spent":"471.260771ms","remote":"127.0.0.1:50224","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1133,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:23:40.746683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"309.263359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T11:23:40.746721Z","caller":"traceutil/trace.go:172","msg":"trace[1905982102] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1590; }","duration":"309.291259ms","start":"2025-09-08T11:23:40.437412Z","end":"2025-09-08T11:23:40.746703Z","steps":["trace[1905982102] 'agreement among raft nodes before linearized reading'  (duration: 214.900004ms)","trace[1905982102] 'range keys from in-memory index tree'  (duration: 94.277954ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:23:40.746741Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:23:40.437305Z","time spent":"309.430761ms","remote":"127.0.0.1:49904","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-09-08T11:23:40.746949Z","caller":"traceutil/trace.go:172","msg":"trace[1639405282] transaction","detail":"{read_only:false; response_revision:1591; number_of_response:1; }","duration":"283.011065ms","start":"2025-09-08T11:23:40.463929Z","end":"2025-09-08T11:23:40.746940Z","steps":["trace[1639405282] 'process raft request'  (duration: 188.311006ms)","trace[1639405282] 'compare'  (duration: 94.631258ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T11:23:41.049354Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"d9d2266b019978c3","to":"b23cc5464918f732","bytes":2286108,"size":"2.3 MB","took":"30.066212334s"}
	{"level":"warn","ts":"2025-09-08T11:24:37.224433Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.518516ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T11:24:37.224501Z","caller":"traceutil/trace.go:172","msg":"trace[1442388869] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:1761; }","duration":"142.597217ms","start":"2025-09-08T11:24:37.081890Z","end":"2025-09-08T11:24:37.224488Z","steps":["trace[1442388869] 'agreement among raft nodes before linearized reading'  (duration: 104.539365ms)","trace[1442388869] 'range keys from in-memory index tree'  (duration: 37.958351ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:24:37.227264Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.54003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-7b57f96db7-jhfbm\" limit:1 ","response":"range_response_count:1 size:2286"}
	{"level":"info","ts":"2025-09-08T11:24:37.234927Z","caller":"traceutil/trace.go:172","msg":"trace[462298785] range","detail":"{range_begin:/registry/pods/default/busybox-7b57f96db7-jhfbm; range_end:; response_count:1; response_revision:1761; }","duration":"119.195801ms","start":"2025-09-08T11:24:37.115712Z","end":"2025-09-08T11:24:37.234908Z","steps":["trace[462298785] 'agreement among raft nodes before linearized reading'  (duration: 70.733053ms)","trace[462298785] 'range keys from in-memory index tree'  (duration: 38.543456ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T11:25:24.678453Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1077}
	{"level":"info","ts":"2025-09-08T11:25:24.745244Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1077,"took":"66.341742ms","hash":4211525423,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2101248,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-08T11:25:24.745436Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4211525423,"revision":1077,"compact-revision":-1}
	
	
	==> kernel <==
	 11:25:42 up 12 min,  0 users,  load average: 0.18, 0.33, 0.24
	Linux ha-331000 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [d20041f7a2f0] <==
	I0908 11:24:56.302986       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	I0908 11:25:06.299446       1 main.go:297] Handling node with IPs: map[172.20.54.101:{}]
	I0908 11:25:06.299648       1 main.go:324] Node ha-331000-m02 has CIDR [10.244.1.0/24] 
	I0908 11:25:06.299885       1 main.go:297] Handling node with IPs: map[172.20.56.88:{}]
	I0908 11:25:06.299895       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	I0908 11:25:06.299989       1 main.go:297] Handling node with IPs: map[172.20.59.73:{}]
	I0908 11:25:06.299997       1 main.go:301] handling current node
	I0908 11:25:16.295452       1 main.go:297] Handling node with IPs: map[172.20.59.73:{}]
	I0908 11:25:16.295503       1 main.go:301] handling current node
	I0908 11:25:16.295580       1 main.go:297] Handling node with IPs: map[172.20.54.101:{}]
	I0908 11:25:16.295591       1 main.go:324] Node ha-331000-m02 has CIDR [10.244.1.0/24] 
	I0908 11:25:16.296404       1 main.go:297] Handling node with IPs: map[172.20.56.88:{}]
	I0908 11:25:16.296466       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	I0908 11:25:26.303985       1 main.go:297] Handling node with IPs: map[172.20.54.101:{}]
	I0908 11:25:26.304093       1 main.go:324] Node ha-331000-m02 has CIDR [10.244.1.0/24] 
	I0908 11:25:26.304838       1 main.go:297] Handling node with IPs: map[172.20.56.88:{}]
	I0908 11:25:26.304929       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	I0908 11:25:26.305068       1 main.go:297] Handling node with IPs: map[172.20.59.73:{}]
	I0908 11:25:26.305171       1 main.go:301] handling current node
	I0908 11:25:36.305241       1 main.go:297] Handling node with IPs: map[172.20.59.73:{}]
	I0908 11:25:36.305450       1 main.go:301] handling current node
	I0908 11:25:36.305566       1 main.go:297] Handling node with IPs: map[172.20.54.101:{}]
	I0908 11:25:36.305578       1 main.go:324] Node ha-331000-m02 has CIDR [10.244.1.0/24] 
	I0908 11:25:36.305914       1 main.go:297] Handling node with IPs: map[172.20.56.88:{}]
	I0908 11:25:36.305927       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7ac2656037f5] <==
	I0908 11:20:39.375961       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:21:44.732174       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:21:45.356023       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:22:49.390702       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:23:01.947281       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:24:10.383374       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:24:16.504215       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 11:24:44.331865       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51234: use of closed network connection
	E0908 11:24:45.040877       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51236: use of closed network connection
	E0908 11:24:45.553638       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51238: use of closed network connection
	E0908 11:24:46.162031       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51240: use of closed network connection
	E0908 11:24:46.668604       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51242: use of closed network connection
	E0908 11:24:47.189717       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51244: use of closed network connection
	E0908 11:24:47.689400       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51246: use of closed network connection
	E0908 11:24:48.261285       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51249: use of closed network connection
	E0908 11:24:48.792689       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51251: use of closed network connection
	E0908 11:24:49.717789       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51254: use of closed network connection
	E0908 11:25:00.250542       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51256: use of closed network connection
	E0908 11:25:00.773294       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51258: use of closed network connection
	E0908 11:25:11.300804       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51260: use of closed network connection
	E0908 11:25:11.800521       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51264: use of closed network connection
	I0908 11:25:20.965406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 11:25:22.303011       1 conn.go:339] Error on socket receive: read tcp 172.20.63.254:8443->172.20.48.1:51266: use of closed network connection
	I0908 11:25:26.874956       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 11:25:36.226533       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [ba99e0fd1b29] <==
	I0908 11:15:35.138894       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:15:35.143299       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 11:15:35.144201       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 11:15:35.145579       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 11:15:35.145608       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 11:15:35.146246       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 11:15:35.147169       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:15:35.153373       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:15:35.161064       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 11:15:35.163458       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 11:15:35.164032       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-331000" podCIDRs=["10.244.0.0/24"]
	I0908 11:15:35.189439       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:15:35.192020       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 11:15:35.196876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:15:35.196893       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:15:35.196909       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:15:35.207865       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:15:35.214436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:16:00.129999       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0908 11:19:21.180618       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-331000-m02\" does not exist"
	I0908 11:19:21.276021       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-331000-m02" podCIDRs=["10.244.1.0/24"]
	I0908 11:19:25.171635       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-331000-m02"
	I0908 11:23:30.607666       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-331000-m03\" does not exist"
	I0908 11:23:30.672032       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-331000-m03" podCIDRs=["10.244.2.0/24"]
	I0908 11:23:35.464567       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-331000-m03"
	
	
	==> kube-proxy [97663746caa0] <==
	I0908 11:15:37.550833       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:15:37.651533       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:15:37.651635       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.20.59.73"]
	E0908 11:15:37.651830       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:15:37.707963       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 11:15:37.708058       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:15:37.708087       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:15:37.721544       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:15:37.722140       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:15:37.722160       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:15:37.728460       1 config.go:200] "Starting service config controller"
	I0908 11:15:37.731827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:15:37.732282       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:15:37.732659       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:15:37.732832       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:15:37.733087       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:15:37.731055       1 config.go:309] "Starting node config controller"
	I0908 11:15:37.734538       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:15:37.734611       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:15:37.833458       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:15:37.833458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:15:37.833490       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ea216735dd19] <==
	E0908 11:15:28.100928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:15:28.258898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:15:28.265455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 11:15:28.271713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:15:28.329165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 11:15:28.365273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 11:15:28.440804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:15:28.511098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 11:15:28.519525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 11:15:28.573194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 11:15:28.603573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0908 11:15:29.806937       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0908 11:19:21.318623       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mrfp7\": pod kindnet-mrfp7 is already assigned to node \"ha-331000-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mrfp7" node="ha-331000-m02"
	E0908 11:19:21.319527       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mrfp7\": pod kindnet-mrfp7 is already assigned to node \"ha-331000-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mrfp7"
	E0908 11:19:21.320883       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mwwp8\": pod kube-proxy-mwwp8 is already assigned to node \"ha-331000-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mwwp8" node="ha-331000-m02"
	E0908 11:19:21.320966       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mwwp8\": pod kube-proxy-mwwp8 is already assigned to node \"ha-331000-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-mwwp8"
	I0908 11:19:21.323382       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mwwp8" node="ha-331000-m02"
	E0908 11:23:30.788937       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kt6wd\": pod kube-proxy-kt6wd is already assigned to node \"ha-331000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kt6wd" node="ha-331000-m03"
	E0908 11:23:30.789039       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b04aa754-6d79-4baa-81e8-215962b8505d(kube-system/kube-proxy-kt6wd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-kt6wd"
	E0908 11:23:30.789068       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kt6wd\": pod kube-proxy-kt6wd is already assigned to node \"ha-331000-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-kt6wd"
	I0908 11:23:30.790368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kt6wd" node="ha-331000-m03"
	E0908 11:23:30.809903       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lp8fx\": pod kube-proxy-lp8fx is already assigned to node \"ha-331000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lp8fx" node="ha-331000-m03"
	E0908 11:23:30.809968       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d8be3dbd-99de-407b-a910-e39dbe6edb38(kube-system/kube-proxy-lp8fx) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-lp8fx"
	E0908 11:23:30.809987       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lp8fx\": pod kube-proxy-lp8fx is already assigned to node \"ha-331000-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-lp8fx"
	I0908 11:23:30.812920       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lp8fx" node="ha-331000-m03"
	
	
	==> kubelet <==
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.007864    2904 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.008235    2904 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: E0908 11:15:36.070629    2904 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-331000\" already exists" pod="kube-system/kube-apiserver-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: E0908 11:15:36.078805    2904 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-331000\" already exists" pod="kube-system/kube-controller-manager-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: E0908 11:15:36.108765    2904 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-331000\" already exists" pod="kube-system/kube-scheduler-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.295037    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-ha-331000" podStartSLOduration=1.2950197669999999 podStartE2EDuration="1.295019767s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:15:36.21773243 +0000 UTC m=+1.639789974" watchObservedRunningTime="2025-09-08 11:15:36.295019767 +0000 UTC m=+1.717077211"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.457689    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ha-331000" podStartSLOduration=1.457635961 podStartE2EDuration="1.457635961s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:15:36.417024563 +0000 UTC m=+1.839082107" watchObservedRunningTime="2025-09-08 11:15:36.457635961 +0000 UTC m=+1.879693505"
	Sep 08 11:15:37 ha-331000 kubelet[2904]: I0908 11:15:37.401301    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d644b2de2060828a617429cff42a24609158d29262086069e3c9a74893405e0"
	Sep 08 11:15:38 ha-331000 kubelet[2904]: I0908 11:15:38.452832    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-smrc9" podStartSLOduration=3.4528146 podStartE2EDuration="3.4528146s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:15:38.452749999 +0000 UTC m=+3.874807543" watchObservedRunningTime="2025-09-08 11:15:38.4528146 +0000 UTC m=+3.874872044"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.597908    2904 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.832978    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s8k98" podStartSLOduration=16.970387441 podStartE2EDuration="23.832960357s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="2025-09-08 11:15:37.405014632 +0000 UTC m=+2.827072076" lastFinishedPulling="2025-09-08 11:15:44.267587548 +0000 UTC m=+9.689644992" observedRunningTime="2025-09-08 11:15:46.683632792 +0000 UTC m=+12.105690336" watchObservedRunningTime="2025-09-08 11:15:58.832960357 +0000 UTC m=+24.255017901"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891829    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ppmw\" (UniqueName: \"kubernetes.io/projected/7d55f59c-2274-4acf-88e6-9d8249a799ec-kube-api-access-9ppmw\") pod \"coredns-66bc5c9577-66pcq\" (UID: \"7d55f59c-2274-4acf-88e6-9d8249a799ec\") " pod="kube-system/coredns-66bc5c9577-66pcq"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891874    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgctn\" (UniqueName: \"kubernetes.io/projected/91f36133-5872-4bf2-9606-697f746f797f-kube-api-access-dgctn\") pod \"storage-provisioner\" (UID: \"91f36133-5872-4bf2-9606-697f746f797f\") " pod="kube-system/storage-provisioner"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891905    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lzs5\" (UniqueName: \"kubernetes.io/projected/bfc5c253-e38e-4a3f-94b9-fb077529ad73-kube-api-access-4lzs5\") pod \"coredns-66bc5c9577-x595c\" (UID: \"bfc5c253-e38e-4a3f-94b9-fb077529ad73\") " pod="kube-system/coredns-66bc5c9577-x595c"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891926    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d55f59c-2274-4acf-88e6-9d8249a799ec-config-volume\") pod \"coredns-66bc5c9577-66pcq\" (UID: \"7d55f59c-2274-4acf-88e6-9d8249a799ec\") " pod="kube-system/coredns-66bc5c9577-66pcq"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891951    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/91f36133-5872-4bf2-9606-697f746f797f-tmp\") pod \"storage-provisioner\" (UID: \"91f36133-5872-4bf2-9606-697f746f797f\") " pod="kube-system/storage-provisioner"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891972    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfc5c253-e38e-4a3f-94b9-fb077529ad73-config-volume\") pod \"coredns-66bc5c9577-x595c\" (UID: \"bfc5c253-e38e-4a3f-94b9-fb077529ad73\") " pod="kube-system/coredns-66bc5c9577-x595c"
	Sep 08 11:15:59 ha-331000 kubelet[2904]: I0908 11:15:59.784449    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c821f225b0bb599592a36aac7bec4ea340c7f9d2b6b9f1795ec0bebb0f557f45"
	Sep 08 11:15:59 ha-331000 kubelet[2904]: I0908 11:15:59.861879    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e017b041362ad82b2f50619699fbc7817aa174dcfd11fdd7a477c41ac0cee38"
	Sep 08 11:15:59 ha-331000 kubelet[2904]: I0908 11:15:59.895963    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f06ca26bb0d46350387ead567b86c32d03c9cdcfc193aa2b23eeed4c17a82d"
	Sep 08 11:16:00 ha-331000 kubelet[2904]: I0908 11:16:00.955110    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x595c" podStartSLOduration=24.955093874 podStartE2EDuration="24.955093874s" podCreationTimestamp="2025-09-08 11:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:16:00.95040786 +0000 UTC m=+26.372465304" watchObservedRunningTime="2025-09-08 11:16:00.955093874 +0000 UTC m=+26.377151418"
	Sep 08 11:16:01 ha-331000 kubelet[2904]: I0908 11:16:01.042554    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.04253744 podStartE2EDuration="16.04253744s" podCreationTimestamp="2025-09-08 11:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:16:01.004134327 +0000 UTC m=+26.426191871" watchObservedRunningTime="2025-09-08 11:16:01.04253744 +0000 UTC m=+26.464594884"
	Sep 08 11:16:01 ha-331000 kubelet[2904]: I0908 11:16:01.080767    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-66pcq" podStartSLOduration=25.080750152 podStartE2EDuration="25.080750152s" podCreationTimestamp="2025-09-08 11:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:16:01.080509651 +0000 UTC m=+26.502567195" watchObservedRunningTime="2025-09-08 11:16:01.080750152 +0000 UTC m=+26.502807596"
	Sep 08 11:24:37 ha-331000 kubelet[2904]: I0908 11:24:37.101545    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkj7\" (UniqueName: \"kubernetes.io/projected/54e7a78b-44aa-46cb-a877-dc73d8d83565-kube-api-access-6pkj7\") pod \"busybox-7b57f96db7-9vn9f\" (UID: \"54e7a78b-44aa-46cb-a877-dc73d8d83565\") " pod="default/busybox-7b57f96db7-9vn9f"
	Sep 08 11:24:38 ha-331000 kubelet[2904]: I0908 11:24:38.196272    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5353fd2e31b2d7d5559e16026a8ea6c4407aca4807d3e4c9ee40d27783ac82e"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-331000 -n ha-331000
E0908 11:25:53.434516   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-331000 -n ha-331000: (12.1792876s)
helpers_test.go:269: (dbg) Run:  kubectl --context ha-331000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (68.37s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (96.59s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 node stop m02 --alsologtostderr -v 5: (36.1724445s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-331000 status --alsologtostderr -v 5: exit status 1 (25.464629s)

** stderr ** 
	I0908 11:42:04.955302   13292 out.go:360] Setting OutFile to fd 1752 ...
	I0908 11:42:05.026301   13292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:42:05.026301   13292 out.go:374] Setting ErrFile to fd 1560...
	I0908 11:42:05.026301   13292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:42:05.041308   13292 out.go:368] Setting JSON to false
	I0908 11:42:05.041308   13292 mustload.go:65] Loading cluster: ha-331000
	I0908 11:42:05.041308   13292 notify.go:220] Checking for updates...
	I0908 11:42:05.042298   13292 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:42:05.042298   13292 status.go:174] checking status of ha-331000 ...
	I0908 11:42:05.043319   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:42:07.324368   13292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:42:07.324368   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:07.324368   13292 status.go:371] ha-331000 host status = "Running" (err=<nil>)
	I0908 11:42:07.324368   13292 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:42:07.325342   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:42:09.583216   13292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:42:09.584126   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:09.584567   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:42:12.311427   13292 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:42:12.311427   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:12.311427   13292 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:42:12.325323   13292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:42:12.326225   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:42:14.564190   13292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:42:14.565216   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:14.565559   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:42:17.207237   13292 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:42:17.207917   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:17.208364   13292 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:42:17.316679   13292 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9912935s)
	I0908 11:42:17.329039   13292 ssh_runner.go:195] Run: systemctl --version
	I0908 11:42:17.354596   13292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:42:17.387641   13292 kubeconfig.go:125] found "ha-331000" server: "https://172.20.63.254:8443"
	I0908 11:42:17.387748   13292 api_server.go:166] Checking apiserver status ...
	I0908 11:42:17.400387   13292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:42:17.453260   13292 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2622/cgroup
	W0908 11:42:17.474917   13292 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2622/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:42:17.486388   13292 ssh_runner.go:195] Run: ls
	I0908 11:42:17.496077   13292 api_server.go:253] Checking apiserver healthz at https://172.20.63.254:8443/healthz ...
	I0908 11:42:17.506473   13292 api_server.go:279] https://172.20.63.254:8443/healthz returned 200:
	ok
	I0908 11:42:17.506473   13292 status.go:463] ha-331000 apiserver status = Running (err=<nil>)
	I0908 11:42:17.506473   13292 status.go:176] ha-331000 status: &{Name:ha-331000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:42:17.506473   13292 status.go:174] checking status of ha-331000-m02 ...
	I0908 11:42:17.506473   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:42:19.632642   13292 main.go:141] libmachine: [stdout =====>] : Off
	
	I0908 11:42:19.633675   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:19.633675   13292 status.go:371] ha-331000-m02 host status = "Stopped" (err=<nil>)
	I0908 11:42:19.633675   13292 status.go:384] host is not running, skipping remaining checks
	I0908 11:42:19.633814   13292 status.go:176] ha-331000-m02 status: &{Name:ha-331000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:42:19.633814   13292 status.go:174] checking status of ha-331000-m03 ...
	I0908 11:42:19.634657   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:42:21.788155   13292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:42:21.788155   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:21.788155   13292 status.go:371] ha-331000-m03 host status = "Running" (err=<nil>)
	I0908 11:42:21.788155   13292 host.go:66] Checking if "ha-331000-m03" exists ...
	I0908 11:42:21.790108   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:42:23.930686   13292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:42:23.931907   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:23.931907   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:42:26.490097   13292 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:42:26.490097   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:26.490839   13292 host.go:66] Checking if "ha-331000-m03" exists ...
	I0908 11:42:26.502969   13292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:42:26.502969   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:42:28.710184   13292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:42:28.711184   13292 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:42:28.711273   13292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]

** /stderr **
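The `unable to find freezer cgroup` warning in the stderr above comes from `sudo egrep ^[0-9]+:freezer: /proc/2622/cgroup` exiting 1 with empty output. On cgroup v1 hosts, `/proc/<pid>/cgroup` lists one `<n>:<controller>:<path>` line per controller, so the freezer line matches; on cgroup v2 the file holds a single `0::/path` entry with no per-controller lines, so the grep finds nothing. A minimal sketch of that distinction (the sample file contents are illustrative, not taken from this run):

```python
import re
from typing import Optional

# Illustrative /proc/<pid>/cgroup contents; not captured from this report.
CGROUP_V1 = "7:freezer:/kubepods/burstable/pod123\n4:cpu,cpuacct:/kubepods\n"
CGROUP_V2 = "0::/system.slice/kubelet.service\n"

def freezer_path(cgroup_text: str) -> Optional[str]:
    """Mimic egrep '^[0-9]+:freezer:' — return the freezer path, or None."""
    m = re.search(r"^\d+:freezer:(.*)$", cgroup_text, re.MULTILINE)
    return m.group(1) if m else None

print(freezer_path(CGROUP_V1))  # → /kubepods/burstable/pod123
print(freezer_path(CGROUP_V2))  # → None (matches the exit-status-1 case)
```

As the log shows, the status check tolerates this miss and falls back to probing `/healthz` directly.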
ha_test.go:374: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-331000 status --alsologtostderr -v 5" : exit status 1
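The disk-usage probe that `status` runs over SSH in the stderr above (`df -h /var | awk 'NR==2{print $5}'`) simply extracts the `Use%` column from the second line of `df` output. A minimal Python sketch of that parse, using made-up sample output rather than values from this run:

```python
# Illustrative df output; the sizes and mount are not from this report.
SAMPLE_DF = """Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        17G  3.2G   13G  20% /var
"""

def use_percent(df_output: str) -> str:
    # awk 'NR==2{print $5}': take the second line, fifth whitespace field.
    return df_output.splitlines()[1].split()[4]

print(use_percent(SAMPLE_DF))  # → 20%
```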
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-331000 -n ha-331000
E0908 11:42:33.449129   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-331000 -n ha-331000: (12.4146328s)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 logs -n 25
E0908 11:42:50.366661   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 logs -n 25: (8.6770141s)
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │  PROFILE  │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m03.txt │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:37 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:37 UTC │
	│ cp      │ ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m03_ha-331000.txt                                                               │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:37 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:37 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000 sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000.txt                                                                                         │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:37 UTC │
	│ cp      │ ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000-m03_ha-331000-m02.txt                                                       │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:38 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m02 sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000-m02.txt                                                                                 │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	│ cp      │ ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000-m04:/home/docker/cp-test_ha-331000-m03_ha-331000-m04.txt                                                       │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m04 sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000-m04.txt                                                                                 │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:39 UTC │
	│ cp      │ ha-331000 cp testdata\cp-test.txt ha-331000-m04:/home/docker/cp-test.txt                                                                                                     │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:39 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m04 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:39 UTC │
	│ cp      │ ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m04.txt │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:39 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m04 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:39 UTC │
	│ cp      │ ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m04_ha-331000.txt                                                               │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:39 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m04 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:40 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000 sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000.txt                                                                                         │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:40 UTC │ 08 Sep 25 11:40 UTC │
	│ cp      │ ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000-m04_ha-331000-m02.txt                                                       │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:40 UTC │ 08 Sep 25 11:40 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m04 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:40 UTC │ 08 Sep 25 11:40 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m02 sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000-m02.txt                                                                                 │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:40 UTC │ 08 Sep 25 11:40 UTC │
	│ cp      │ ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000-m03:/home/docker/cp-test_ha-331000-m04_ha-331000-m03.txt                                                       │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:40 UTC │ 08 Sep 25 11:41 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m04 sudo cat /home/docker/cp-test.txt                                                                                                             │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:41 UTC │ 08 Sep 25 11:41 UTC │
	│ ssh     │ ha-331000 ssh -n ha-331000-m03 sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000-m03.txt                                                                                 │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:41 UTC │ 08 Sep 25 11:41 UTC │
	│ node    │ ha-331000 node stop m02 --alsologtostderr -v 5                                                                                                                               │ ha-331000 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 11:41 UTC │ 08 Sep 25 11:42 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:12:30
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:12:30.357793    9032 out.go:360] Setting OutFile to fd 1616 ...
	I0908 11:12:30.428709    9032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:12:30.428709    9032 out.go:374] Setting ErrFile to fd 1280...
	I0908 11:12:30.428709    9032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:12:30.446928    9032 out.go:368] Setting JSON to false
	I0908 11:12:30.450308    9032 start.go:130] hostinfo: {"hostname":"minikube6","uptime":298802,"bootTime":1757031148,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 11:12:30.450503    9032 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 11:12:30.457055    9032 out.go:179] * [ha-331000] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 11:12:30.459803    9032 notify.go:220] Checking for updates...
	I0908 11:12:30.461787    9032 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 11:12:30.463881    9032 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:12:30.466843    9032 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 11:12:30.469654    9032 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:12:30.474812    9032 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:12:30.478251    9032 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:12:35.707222    9032 out.go:179] * Using the hyperv driver based on user configuration
	I0908 11:12:35.711183    9032 start.go:304] selected driver: hyperv
	I0908 11:12:35.711183    9032 start.go:918] validating driver "hyperv" against <nil>
	I0908 11:12:35.711183    9032 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:12:35.761304    9032 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:12:35.762253    9032 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:12:35.762253    9032 cni.go:84] Creating CNI manager for ""
	I0908 11:12:35.762253    9032 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0908 11:12:35.762253    9032 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:12:35.762253    9032 start.go:348] cluster config:
	{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:12:35.763216    9032 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:12:35.767916    9032 out.go:179] * Starting "ha-331000" primary control-plane node in "ha-331000" cluster
	I0908 11:12:35.771960    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:12:35.772244    9032 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 11:12:35.772244    9032 cache.go:58] Caching tarball of preloaded images
	I0908 11:12:35.772244    9032 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 11:12:35.772244    9032 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 11:12:35.773556    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:12:35.773870    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json: {Name:mk2586e434fbc41bf6cf75af480ab2fbb9c74b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:12:35.774597    9032 start.go:360] acquireMachinesLock for ha-331000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:12:35.775275    9032 start.go:364] duration metric: took 150.4µs to acquireMachinesLock for "ha-331000"
	I0908 11:12:35.775275    9032 start.go:93] Provisioning new machine with config: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:12:35.775275    9032 start.go:125] createHost starting for "" (driver="hyperv")
	I0908 11:12:35.779110    9032 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 11:12:35.780306    9032 start.go:159] libmachine.API.Create for "ha-331000" (driver="hyperv")
	I0908 11:12:35.780306    9032 client.go:168] LocalClient.Create starting
	I0908 11:12:35.780482    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 11:12:35.781329    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:12:35.781329    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:12:35.781538    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 11:12:35.781538    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:12:35.781538    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:12:35.782068    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 11:12:37.794131    9032 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 11:12:37.794131    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:37.794214    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 11:12:39.499693    9032 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 11:12:39.499918    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:39.500011    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:12:40.973413    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:12:40.973413    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:40.973656    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:12:44.568633    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:12:44.569720    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:44.572135    9032 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 11:12:45.147882    9032 main.go:141] libmachine: Creating SSH key...
	I0908 11:12:45.210329    9032 main.go:141] libmachine: Creating VM...
	I0908 11:12:45.210329    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:12:47.920599    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:12:47.921044    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:47.921044    9032 main.go:141] libmachine: Using switch "Default Switch"
	I0908 11:12:47.921044    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:12:49.632652    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:12:49.633427    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:49.633427    9032 main.go:141] libmachine: Creating VHD
	I0908 11:12:49.633427    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 11:12:53.113222    9032 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9F7591C7-D83B-4330-B73D-372ADE94B7E3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 11:12:53.113848    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:53.113848    9032 main.go:141] libmachine: Writing magic tar header
	I0908 11:12:53.113848    9032 main.go:141] libmachine: Writing SSH key tar header
	I0908 11:12:53.129508    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 11:12:56.218026    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:12:56.219009    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:56.219252    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\disk.vhd' -SizeBytes 20000MB
	I0908 11:12:58.649027    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:12:58.649027    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:12:58.649814    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-331000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 11:13:02.127811    9032 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-331000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 11:13:02.128517    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:02.128517    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-331000 -DynamicMemoryEnabled $false
	I0908 11:13:04.284275    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:04.284988    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:04.284988    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-331000 -Count 2
	I0908 11:13:06.363761    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:06.363761    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:06.363761    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-331000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\boot2docker.iso'
	I0908 11:13:08.923348    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:08.923988    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:08.924092    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-331000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\disk.vhd'
	I0908 11:13:11.480526    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:11.481619    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:11.481619    9032 main.go:141] libmachine: Starting VM...
	I0908 11:13:11.481672    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-331000
	I0908 11:13:14.644207    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:14.644207    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:14.645214    9032 main.go:141] libmachine: Waiting for host to start...
	I0908 11:13:14.645214    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:16.794568    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:16.794923    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:16.794923    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:19.268251    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:19.269387    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:20.270231    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:22.369013    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:22.369013    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:22.369013    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:24.853789    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:24.853789    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:25.854667    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:28.072373    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:28.073317    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:28.073404    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:30.559623    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:30.559661    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:31.560086    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:33.724043    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:33.724043    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:33.725005    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:36.226677    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:13:36.226677    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:37.227933    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:39.430834    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:39.430834    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:39.430913    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:41.936590    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:41.936590    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:41.936590    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:44.060700    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:44.060700    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:44.060700    9032 machine.go:93] provisionDockerMachine start ...
	I0908 11:13:44.060700    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:46.178026    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:46.179157    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:46.179268    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:48.627344    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:48.627805    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:48.633487    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:13:48.650210    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:13:48.650210    9032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:13:48.784728    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:13:48.784728    9032 buildroot.go:166] provisioning hostname "ha-331000"
	I0908 11:13:48.784969    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:50.777190    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:50.778228    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:50.778228    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:53.138814    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:53.139060    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:53.144255    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:13:53.144834    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:13:53.144834    9032 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-331000 && echo "ha-331000" | sudo tee /etc/hostname
	I0908 11:13:53.304213    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-331000
	
	I0908 11:13:53.304213    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:55.327227    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:55.327496    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:55.327496    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:13:57.748087    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:13:57.748087    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:57.754180    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:13:57.754180    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:13:57.754717    9032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-331000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-331000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-331000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:13:57.909311    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:13:57.909311    9032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 11:13:57.909311    9032 buildroot.go:174] setting up certificates
	I0908 11:13:57.909311    9032 provision.go:84] configureAuth start
	I0908 11:13:57.909909    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:13:59.883485    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:13:59.883485    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:13:59.884028    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:02.341169    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:02.341793    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:02.341892    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:04.364727    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:04.364727    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:04.364973    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:06.784973    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:06.785922    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:06.786086    9032 provision.go:143] copyHostCerts
	I0908 11:14:06.786250    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 11:14:06.786700    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 11:14:06.786782    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 11:14:06.787245    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 11:14:06.788950    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 11:14:06.789369    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 11:14:06.789369    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 11:14:06.789744    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 11:14:06.790798    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 11:14:06.790798    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 11:14:06.790798    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 11:14:06.791625    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 11:14:06.792987    9032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-331000 san=[127.0.0.1 172.20.59.73 ha-331000 localhost minikube]
	I0908 11:14:06.981248    9032 provision.go:177] copyRemoteCerts
	I0908 11:14:06.990498    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:14:06.991519    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:08.990439    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:08.990439    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:08.990439    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:11.512635    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:11.512635    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:11.513413    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:11.633154    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6425979s)
	I0908 11:14:11.633154    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 11:14:11.633699    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:14:11.683791    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 11:14:11.683791    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0908 11:14:11.737610    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 11:14:11.737885    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:14:11.796087    9032 provision.go:87] duration metric: took 13.8866022s to configureAuth
	I0908 11:14:11.796200    9032 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:14:11.796903    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:14:11.797082    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:13.938181    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:13.938181    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:13.938181    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:16.302261    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:16.302261    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:16.308708    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:16.308882    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:16.308882    9032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 11:14:16.441323    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 11:14:16.441370    9032 buildroot.go:70] root file system type: tmpfs
	I0908 11:14:16.441607    9032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 11:14:16.441758    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:18.487696    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:18.487696    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:18.488523    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:20.936929    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:20.937053    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:20.942186    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:20.942908    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:20.943511    9032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 11:14:21.099174    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 11:14:21.099329    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:23.141580    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:23.141669    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:23.141669    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:25.544188    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:25.544188    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:25.551246    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:25.551246    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:25.551246    9032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 11:14:26.900965    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 11:14:26.900965    9032 machine.go:96] duration metric: took 42.8397288s to provisionDockerMachine
	I0908 11:14:26.900965    9032 client.go:171] duration metric: took 1m51.1192697s to LocalClient.Create
	I0908 11:14:26.900965    9032 start.go:167] duration metric: took 1m51.1192697s to libmachine.API.Create "ha-331000"
	I0908 11:14:26.900965    9032 start.go:293] postStartSetup for "ha-331000" (driver="hyperv")
	I0908 11:14:26.900965    9032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:14:26.913952    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:14:26.913952    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:29.028020    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:29.028232    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:29.028312    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:31.462100    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:31.462100    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:31.462646    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:31.566214    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6522042s)
	I0908 11:14:31.577383    9032 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:14:31.584290    9032 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:14:31.584290    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 11:14:31.584290    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 11:14:31.585666    9032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 11:14:31.585733    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 11:14:31.595280    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:14:31.615921    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 11:14:31.672755    9032 start.go:296] duration metric: took 4.7717302s for postStartSetup
	I0908 11:14:31.676077    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:33.710199    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:33.710847    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:33.710847    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:36.133027    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:36.133962    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:36.134121    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:14:36.136753    9032 start.go:128] duration metric: took 2m0.3599727s to createHost
	I0908 11:14:36.136753    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:38.193042    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:38.194125    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:38.194125    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:40.655340    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:40.656290    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:40.662420    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:40.663140    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:40.663140    9032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:14:40.786102    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757330080.780655954
	
	I0908 11:14:40.786102    9032 fix.go:216] guest clock: 1757330080.780655954
	I0908 11:14:40.786176    9032 fix.go:229] Guest: 2025-09-08 11:14:40.780655954 +0000 UTC Remote: 2025-09-08 11:14:36.1367531 +0000 UTC m=+125.870517401 (delta=4.643902854s)
	I0908 11:14:40.786244    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:42.832697    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:42.833212    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:42.833212    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:45.235582    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:45.235781    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:45.240952    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:14:45.241720    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.73 22 <nil> <nil>}
	I0908 11:14:45.241720    9032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757330080
	I0908 11:14:45.385170    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 11:14:40 UTC 2025
	
	I0908 11:14:45.385170    9032 fix.go:236] clock set: Mon Sep  8 11:14:40 UTC 2025
	 (err=<nil>)
	I0908 11:14:45.385170    9032 start.go:83] releasing machines lock for "ha-331000", held for 2m9.6082739s
	I0908 11:14:45.385170    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:47.356280    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:47.356280    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:47.356598    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:49.796446    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:49.796446    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:49.800867    9032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 11:14:49.800953    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:49.809832    9032 ssh_runner.go:195] Run: cat /version.json
	I0908 11:14:49.809832    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:14:51.902233    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:51.903315    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:51.902233    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:14:51.903368    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:51.903368    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:51.903585    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:14:54.396627    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:54.396627    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:54.397005    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:54.459348    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:14:54.459348    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:14:54.460488    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:14:54.497508    9032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.6964582s)
	W0908 11:14:54.497705    9032 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 11:14:54.563396    9032 ssh_runner.go:235] Completed: cat /version.json: (4.7535048s)
	I0908 11:14:54.574840    9032 ssh_runner.go:195] Run: systemctl --version
	I0908 11:14:54.596926    9032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 11:14:54.606771    9032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	W0908 11:14:54.614314    9032 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 11:14:54.614314    9032 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 11:14:54.618587    9032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:14:54.654199    9032 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:14:54.654199    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:14:54.654654    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:14:54.705501    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 11:14:54.747481    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 11:14:54.773239    9032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 11:14:54.783108    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 11:14:54.817477    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:14:54.851996    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 11:14:54.882480    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:14:54.913953    9032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:14:54.946092    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 11:14:54.978328    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 11:14:55.009026    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 11:14:55.064253    9032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:14:55.083335    9032 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:14:55.094919    9032 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:14:55.127350    9032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:14:55.154329    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:55.368501    9032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 11:14:55.429500    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:14:55.442384    9032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 11:14:55.482716    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:14:55.513921    9032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:14:55.552707    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:14:55.587291    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:14:55.623993    9032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 11:14:55.685999    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:14:55.711198    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:14:55.761054    9032 ssh_runner.go:195] Run: which cri-dockerd
	I0908 11:14:55.778018    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 11:14:55.797312    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 11:14:55.846262    9032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 11:14:56.059445    9032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 11:14:56.258500    9032 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 11:14:56.258500    9032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 11:14:56.305704    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:14:56.342289    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:56.554618    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:14:57.251201    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:14:57.288326    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 11:14:57.322325    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:14:57.355090    9032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 11:14:57.596003    9032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 11:14:57.812272    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:58.021044    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 11:14:58.081993    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 11:14:58.115113    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:14:58.360825    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 11:14:58.524018    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:14:58.553032    9032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 11:14:58.563670    9032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 11:14:58.572810    9032 start.go:563] Will wait 60s for crictl version
	I0908 11:14:58.584409    9032 ssh_runner.go:195] Run: which crictl
	I0908 11:14:58.601621    9032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:14:58.662009    9032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 11:14:58.672918    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:14:58.717816    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:14:58.753604    9032 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 11:14:58.753716    9032 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 11:14:58.758585    9032 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 11:14:58.760889    9032 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 11:14:58.760889    9032 ip.go:218] interface addr: 172.20.48.1/20
	I0908 11:14:58.769265    9032 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 11:14:58.776058    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:14:58.811491    9032 kubeadm.go:875] updating cluster {Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0
ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:14:58.812276    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:14:58.821216    9032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 11:14:58.845022    9032 docker.go:691] Got preloaded images: 
	I0908 11:14:58.845094    9032 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0908 11:14:58.857533    9032 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0908 11:14:58.886072    9032 ssh_runner.go:195] Run: which lz4
	I0908 11:14:58.893676    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0908 11:14:58.904310    9032 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 11:14:58.911863    9032 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 11:14:58.911974    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0908 11:15:00.932916    9032 docker.go:655] duration metric: took 2.0388919s to copy over tarball
	I0908 11:15:00.943143    9032 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 11:15:09.925206    9032 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9818945s)
	I0908 11:15:09.925287    9032 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 11:15:09.988549    9032 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0908 11:15:10.007523    9032 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0908 11:15:10.057029    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:15:10.094996    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:15:10.329720    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:15:11.785241    9032 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4553761s)
	I0908 11:15:11.793894    9032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 11:15:11.825346    9032 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0908 11:15:11.825489    9032 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:15:11.825489    9032 kubeadm.go:926] updating node { 172.20.59.73 8443 v1.34.0 docker true true} ...
	I0908 11:15:11.825772    9032 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-331000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:15:11.835298    9032 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 11:15:11.904022    9032 cni.go:84] Creating CNI manager for ""
	I0908 11:15:11.904128    9032 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0908 11:15:11.904184    9032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:15:11.904184    9032 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.59.73 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-331000 NodeName:ha-331000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.59.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.59.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:15:11.904528    9032 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.59.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-331000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.59.73"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.59.73"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:15:11.904614    9032 kube-vip.go:115] generating kube-vip config ...
	I0908 11:15:11.915931    9032 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0908 11:15:11.947488    9032 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0908 11:15:11.947702    9032 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0908 11:15:11.959081    9032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:15:11.978893    9032 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:15:11.989559    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0908 11:15:12.007474    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0908 11:15:12.049978    9032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:15:12.082591    9032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 11:15:12.118433    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0908 11:15:12.172639    9032 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0908 11:15:12.179383    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:15:12.220771    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:15:12.472069    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:15:12.543572    9032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000 for IP: 172.20.59.73
	I0908 11:15:12.543572    9032 certs.go:194] generating shared ca certs ...
	I0908 11:15:12.543673    9032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:12.544721    9032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 11:15:12.545123    9032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 11:15:12.545337    9032 certs.go:256] generating profile certs ...
	I0908 11:15:12.545887    9032 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key
	I0908 11:15:12.545887    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.crt with IP's: []
	I0908 11:15:12.661867    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.crt ...
	I0908 11:15:12.661867    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.crt: {Name:mk982cb9fe6c7582dc197ee82418c9baa0dde8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:12.664225    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key ...
	I0908 11:15:12.664225    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key: {Name:mk58ff292202a11ef18a9e3edabff73fc83409c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:12.665638    9032 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4
	I0908 11:15:12.665638    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.73 172.20.63.254]
	I0908 11:15:13.264316    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4 ...
	I0908 11:15:13.264316    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4: {Name:mke834e6e230ac291685eba75c0c27404a652f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:13.265261    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4 ...
	I0908 11:15:13.265261    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4: {Name:mk68689b6356cc39a769c2bbfea500a7d7e99a3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:13.267246    9032 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.4e8026d4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt
	I0908 11:15:13.281564    9032 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.4e8026d4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key
	I0908 11:15:13.283559    9032 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key
	I0908 11:15:13.283559    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt with IP's: []
	I0908 11:15:13.854056    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt ...
	I0908 11:15:13.854056    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt: {Name:mkcef962eee945cd174f72530a740f24f54057db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:13.855744    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key ...
	I0908 11:15:13.855744    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key: {Name:mkfd532185dbd2c791d00c24d248d2ec16ac09b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
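The crypto.go entries above show minikube issuing CA-signed profile certificates whose subject alternative names are IP addresses (the `with IP's: [...]` lines). The sketch below is not minikube's actual crypto.go implementation — it is a minimal stand-alone illustration of the same pattern using Go's `crypto/x509`, with a throwaway self-signed CA standing in for minikubeCA and two of the IPs from the log reused as example SANs (RSA is chosen here purely for brevity):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signedCertWithIPs issues a certificate signed by caCert/caKey whose
// subject alternative names are the supplied IPs — the step logged as
// `Generating cert ... with IP's: [...]`.
func signedCertWithIPs(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) (*x509.Certificate, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // the IP SANs
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

// demoCert builds a throwaway CA (standing in for minikubeCA) and uses it
// to sign an apiserver-style cert for two illustrative IPs.
func demoCert() (*x509.Certificate, error) {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return nil, err
	}
	return signedCertWithIPs(caCert, caKey, []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1")})
}

func main() {
	cert, err := demoCert()
	if err != nil {
		panic(err)
	}
	fmt.Println("SANs:", cert.IPAddresses)
}
```

The `apiserver.crt.4e8026d4` suffix in the log is a hash over the SAN set; writing to a suffixed file and then copying to `apiserver.crt` lets minikube detect when the SANs change between starts.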
	I0908 11:15:13.856837    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 11:15:13.857366    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 11:15:13.857545    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 11:15:13.857708    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 11:15:13.857708    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 11:15:13.857708    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 11:15:13.858243    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 11:15:13.870973    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 11:15:13.871968    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 11:15:13.872459    9032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 11:15:13.872625    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 11:15:13.872625    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 11:15:13.873208    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 11:15:13.873371    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 11:15:13.874171    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 11:15:13.874475    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 11:15:13.874682    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:13.874682    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 11:15:13.875350    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:15:13.928682    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:15:13.981908    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:15:14.029791    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:15:14.086512    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 11:15:14.137945    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:15:14.195311    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:15:14.247472    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:15:14.300731    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 11:15:14.350818    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:15:14.400248    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 11:15:14.450020    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:15:14.495025    9032 ssh_runner.go:195] Run: openssl version
	I0908 11:15:14.515574    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 11:15:14.551261    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 11:15:14.557812    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 11:15:14.569809    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 11:15:14.593412    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 11:15:14.625859    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 11:15:14.658385    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 11:15:14.666401    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 11:15:14.678800    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 11:15:14.701256    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:15:14.734830    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:15:14.770278    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:14.778581    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:14.789528    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:15:14.808915    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:15:14.844204    9032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:15:14.851476    9032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
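certs.go interprets the failed `stat` above as "likely first start". In the log the check runs over SSH inside the VM via ssh_runner; reduced to plain local Go, the same decision is just an existence test (the path is the one from the log, reused for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// isFirstStart reports whether the marker cert is absent, which the log
// above interprets as a first cluster start rather than an error.
func isFirstStart(path string) (bool, error) {
	_, err := os.Stat(path)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil // no cert yet: first start
	}
	return false, err // nil error means the cert exists
}

func main() {
	first, err := isFirstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println("first start:", first, err)
}
```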
	I0908 11:15:14.851476    9032 kubeadm.go:392] StartCluster: {Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:15:14.862672    9032 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 11:15:14.901218    9032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:15:14.937643    9032 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:15:14.966656    9032 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:15:14.983879    9032 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:15:14.983879    9032 kubeadm.go:157] found existing configuration files:
	
	I0908 11:15:14.997584    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:15:15.018336    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:15:15.029321    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:15:15.062987    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:15:15.083778    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:15:15.094559    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:15:15.124200    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:15:15.143296    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:15:15.154750    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:15:15.186008    9032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:15:15.205299    9032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:15:15.216079    9032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:15:15.236497    9032 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 11:15:15.461403    9032 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 11:15:35.211083    9032 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 11:15:35.211083    9032 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 11:15:35.211083    9032 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 11:15:35.214086    9032 out.go:252]   - Generating certificates and keys ...
	I0908 11:15:35.214086    9032 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 11:15:35.214086    9032 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 11:15:35.214086    9032 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-331000 localhost] and IPs [172.20.59.73 127.0.0.1 ::1]
	I0908 11:15:35.215089    9032 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-331000 localhost] and IPs [172.20.59.73 127.0.0.1 ::1]
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 11:15:35.216089    9032 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 11:15:35.217072    9032 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 11:15:35.217072    9032 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 11:15:35.217072    9032 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 11:15:35.217072    9032 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 11:15:35.221071    9032 out.go:252]   - Booting up control plane ...
	I0908 11:15:35.221071    9032 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 11:15:35.221071    9032 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 11:15:35.221071    9032 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 11:15:35.221071    9032 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 11:15:35.222078    9032 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 11:15:35.222078    9032 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 11:15:35.223103    9032 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 11:15:35.223103    9032 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002172681s
	I0908 11:15:35.223103    9032 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 11:15:35.223103    9032 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://172.20.59.73:8443/livez
	I0908 11:15:35.223103    9032 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.625328387s
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.486846989s
	I0908 11:15:35.224110    9032 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 12.284673563s
	I0908 11:15:35.224110    9032 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 11:15:35.225070    9032 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 11:15:35.225070    9032 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 11:15:35.225070    9032 kubeadm.go:310] [mark-control-plane] Marking the node ha-331000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 11:15:35.225070    9032 kubeadm.go:310] [bootstrap-token] Using token: wqmjmr.2qioywh307t3wcmb
	I0908 11:15:35.228084    9032 out.go:252]   - Configuring RBAC rules ...
	I0908 11:15:35.229173    9032 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 11:15:35.229173    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 11:15:35.229173    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 11:15:35.230079    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 11:15:35.230079    9032 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 11:15:35.230079    9032 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 11:15:35.230079    9032 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 11:15:35.230079    9032 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 11:15:35.231082    9032 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 11:15:35.231082    9032 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 11:15:35.231082    9032 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.231082    9032 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 11:15:35.231082    9032 kubeadm.go:310] 
	I0908 11:15:35.232113    9032 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 11:15:35.232113    9032 kubeadm.go:310] 
	I0908 11:15:35.232113    9032 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 11:15:35.232113    9032 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 11:15:35.232113    9032 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 11:15:35.232113    9032 kubeadm.go:310] 
	I0908 11:15:35.232113    9032 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 11:15:35.232113    9032 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 11:15:35.232113    9032 kubeadm.go:310] 
	I0908 11:15:35.233114    9032 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wqmjmr.2qioywh307t3wcmb \
	I0908 11:15:35.237081    9032 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 \
	I0908 11:15:35.237081    9032 kubeadm.go:310] 	--control-plane 
	I0908 11:15:35.237081    9032 kubeadm.go:310] 
	I0908 11:15:35.237081    9032 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 11:15:35.237081    9032 kubeadm.go:310] 
	I0908 11:15:35.237081    9032 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wqmjmr.2qioywh307t3wcmb \
	I0908 11:15:35.237081    9032 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 
	I0908 11:15:35.238163    9032 cni.go:84] Creating CNI manager for ""
	I0908 11:15:35.238163    9032 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0908 11:15:35.246087    9032 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 11:15:35.260092    9032 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 11:15:35.270116    9032 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 11:15:35.270189    9032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 11:15:35.321105    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 11:15:35.762326    9032 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:15:35.777996    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:15:35.780998    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-331000 minikube.k8s.io/updated_at=2025_09_08T11_15_35_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=ha-331000 minikube.k8s.io/primary=true
	I0908 11:15:35.805782    9032 ops.go:34] apiserver oom_adj: -16
	I0908 11:15:36.105994    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:15:36.330346    9032 kubeadm.go:1105] duration metric: took 567.8222ms to wait for elevateKubeSystemPrivileges
	I0908 11:15:36.330476    9032 kubeadm.go:394] duration metric: took 21.4787318s to StartCluster
	I0908 11:15:36.330476    9032 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:36.330476    9032 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 11:15:36.332234    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:15:36.333327    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 11:15:36.333327    9032 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:15:36.334013    9032 start.go:241] waiting for startup goroutines ...
	I0908 11:15:36.333954    9032 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:15:36.334108    9032 addons.go:69] Setting storage-provisioner=true in profile "ha-331000"
	I0908 11:15:36.334108    9032 addons.go:69] Setting default-storageclass=true in profile "ha-331000"
	I0908 11:15:36.334300    9032 addons.go:238] Setting addon storage-provisioner=true in "ha-331000"
	I0908 11:15:36.334460    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:15:36.334460    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:15:36.334460    9032 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-331000"
	I0908 11:15:36.335489    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:36.335951    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:36.568981    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 11:15:37.178804    9032 start.go:976] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0908 11:15:38.640645    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:38.641032    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:38.644728    9032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:15:38.647714    9032 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:15:38.647882    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:15:38.647944    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:38.872418    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:38.872418    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:38.874255    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 11:15:38.875929    9032 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 11:15:38.875929    9032 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 11:15:38.876476    9032 addons.go:238] Setting addon default-storageclass=true in "ha-331000"
	I0908 11:15:38.876530    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:15:38.877694    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:41.204331    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:41.204331    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:41.204331    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:15:41.279534    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:41.279534    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:41.280476    9032 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:15:41.280476    9032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:15:41.280603    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:15:43.559155    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:15:43.559713    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:43.559713    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:15:43.976897    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:15:43.977673    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:43.978132    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:15:44.122485    9032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:15:45.368004    9032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2455032s)
	I0908 11:15:46.185317    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:15:46.185317    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:46.185317    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:15:46.315402    9032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:15:46.506335    9032 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 11:15:46.513172    9032 addons.go:514] duration metric: took 10.1797173s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 11:15:46.513172    9032 start.go:246] waiting for cluster config update ...
	I0908 11:15:46.513172    9032 start.go:255] writing updated cluster config ...
	I0908 11:15:46.518151    9032 out.go:203] 
	I0908 11:15:46.534491    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:15:46.534662    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:15:46.539780    9032 out.go:179] * Starting "ha-331000-m02" control-plane node in "ha-331000" cluster
	I0908 11:15:46.543589    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:15:46.543589    9032 cache.go:58] Caching tarball of preloaded images
	I0908 11:15:46.543589    9032 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 11:15:46.543589    9032 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 11:15:46.544508    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:15:46.550848    9032 start.go:360] acquireMachinesLock for ha-331000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:15:46.551704    9032 start.go:364] duration metric: took 856.3µs to acquireMachinesLock for "ha-331000-m02"
	I0908 11:15:46.551849    9032 start.go:93] Provisioning new machine with config: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:15:46.551849    9032 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0908 11:15:46.558882    9032 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 11:15:46.558882    9032 start.go:159] libmachine.API.Create for "ha-331000" (driver="hyperv")
	I0908 11:15:46.559481    9032 client.go:168] LocalClient.Create starting
	I0908 11:15:46.559838    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 11:15:46.559838    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:15:46.559838    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:15:46.560563    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 11:15:46.560790    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:15:46.560790    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:15:46.560790    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 11:15:48.428006    9032 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 11:15:48.428006    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:48.429001    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 11:15:50.194992    9032 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 11:15:50.196079    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:50.196079    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:15:51.709546    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:15:51.710364    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:51.710364    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:15:55.207336    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:15:55.207336    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:55.210529    9032 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 11:15:55.840430    9032 main.go:141] libmachine: Creating SSH key...
	I0908 11:15:55.949265    9032 main.go:141] libmachine: Creating VM...
	I0908 11:15:55.949265    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:15:58.787680    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:15:58.787680    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:15:58.788466    9032 main.go:141] libmachine: Using switch "Default Switch"
	I0908 11:15:58.788532    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:16:00.670147    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:16:00.671089    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:00.671089    9032 main.go:141] libmachine: Creating VHD
	I0908 11:16:00.671089    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 11:16:04.306654    9032 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5393F48A-195E-4D61-B4F5-BAA199D68F00
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 11:16:04.307258    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:04.307258    9032 main.go:141] libmachine: Writing magic tar header
	I0908 11:16:04.307258    9032 main.go:141] libmachine: Writing SSH key tar header
	I0908 11:16:04.321222    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 11:16:07.442033    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:07.442033    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:07.442033    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\disk.vhd' -SizeBytes 20000MB
	I0908 11:16:09.950684    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:09.950684    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:09.951272    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-331000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 11:16:13.580447    9032 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-331000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 11:16:13.581261    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:13.581261    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-331000-m02 -DynamicMemoryEnabled $false
	I0908 11:16:15.784325    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:15.785116    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:15.785116    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-331000-m02 -Count 2
	I0908 11:16:17.935594    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:17.935766    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:17.935766    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-331000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\boot2docker.iso'
	I0908 11:16:20.454138    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:20.454138    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:20.454138    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-331000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\disk.vhd'
	I0908 11:16:23.099129    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:23.099129    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:23.099476    9032 main.go:141] libmachine: Starting VM...
	I0908 11:16:23.099476    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-331000-m02
	I0908 11:16:26.195570    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:26.195570    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:26.195570    9032 main.go:141] libmachine: Waiting for host to start...
	I0908 11:16:26.195570    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:28.483547    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:28.483547    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:28.483547    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:31.002912    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:31.003160    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:32.003937    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:34.137931    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:34.138026    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:34.138096    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:36.662185    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:36.662185    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:37.663362    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:39.817092    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:39.817092    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:39.817656    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:42.335513    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:42.335513    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:43.336332    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:45.508536    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:45.508536    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:45.508774    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:48.098654    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:16:48.098654    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:49.099375    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:51.267915    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:51.268086    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:51.268086    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:16:53.824266    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:16:53.824627    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:53.824627    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:55.974500    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:55.974881    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:55.974881    9032 machine.go:93] provisionDockerMachine start ...
	I0908 11:16:55.975001    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:16:58.106416    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:16:58.106416    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:16:58.106416    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:00.609929    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:00.610427    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:00.617152    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:00.635974    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:00.636031    9032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:17:00.772318    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:17:00.772318    9032 buildroot.go:166] provisioning hostname "ha-331000-m02"
	I0908 11:17:00.772381    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:02.856544    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:02.856544    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:02.856776    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:05.362710    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:05.363088    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:05.368676    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:05.369256    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:05.369357    9032 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-331000-m02 && echo "ha-331000-m02" | sudo tee /etc/hostname
	I0908 11:17:05.532364    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-331000-m02
	
	I0908 11:17:05.532511    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:07.659030    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:07.659628    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:07.659753    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:10.109960    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:10.109960    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:10.115349    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:10.115506    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:10.115506    9032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-331000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-331000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-331000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:17:10.260836    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:17:10.260935    9032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 11:17:10.260935    9032 buildroot.go:174] setting up certificates
	I0908 11:17:10.261019    9032 provision.go:84] configureAuth start
	I0908 11:17:10.261110    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:12.383902    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:12.383902    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:12.384718    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:14.947822    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:14.947822    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:14.948189    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:17.056997    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:17.057977    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:17.058180    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:19.520830    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:19.521360    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:19.521360    9032 provision.go:143] copyHostCerts
	I0908 11:17:19.521525    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 11:17:19.521785    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 11:17:19.521785    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 11:17:19.522350    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 11:17:19.523315    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 11:17:19.523315    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 11:17:19.523315    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 11:17:19.524106    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 11:17:19.525625    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 11:17:19.525922    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 11:17:19.525922    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 11:17:19.526346    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 11:17:19.527312    9032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-331000-m02 san=[127.0.0.1 172.20.54.101 ha-331000-m02 localhost minikube]
	I0908 11:17:19.710288    9032 provision.go:177] copyRemoteCerts
	I0908 11:17:19.722033    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:17:19.722196    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:21.790613    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:21.790771    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:21.790844    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:24.326271    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:24.326271    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:24.326467    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:17:24.432055    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.70989s)
	I0908 11:17:24.432055    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 11:17:24.432688    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 11:17:24.488203    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 11:17:24.488203    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 11:17:24.541286    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 11:17:24.541768    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:17:24.594172    9032 provision.go:87] duration metric: took 14.332937s to configureAuth
	I0908 11:17:24.594172    9032 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:17:24.595120    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:17:24.595120    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:26.691698    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:26.691698    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:26.692003    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:29.274971    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:29.274971    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:29.281534    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:29.282268    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:29.282268    9032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 11:17:29.412282    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 11:17:29.412282    9032 buildroot.go:70] root file system type: tmpfs
	I0908 11:17:29.412495    9032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 11:17:29.412587    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:31.457090    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:31.457710    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:31.457820    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:33.955937    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:33.955937    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:33.960952    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:33.961093    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:33.961093    9032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.20.59.73"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 11:17:34.129337    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.20.59.73
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 11:17:34.129337    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:36.212244    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:36.212244    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:36.212356    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:38.704308    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:38.705335    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:38.710734    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:38.711541    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:38.711541    9032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 11:17:40.131463    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 11:17:40.131463    9032 machine.go:96] duration metric: took 44.1560267s to provisionDockerMachine
	I0908 11:17:40.131463    9032 client.go:171] duration metric: took 1m53.5705589s to LocalClient.Create
	I0908 11:17:40.131463    9032 start.go:167] duration metric: took 1m53.5711582s to libmachine.API.Create "ha-331000"
	I0908 11:17:40.131463    9032 start.go:293] postStartSetup for "ha-331000-m02" (driver="hyperv")
	I0908 11:17:40.131463    9032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:17:40.144383    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:17:40.144383    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:42.197302    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:42.197302    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:42.197302    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:44.648358    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:44.648626    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:44.649102    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:17:44.760451    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6160103s)
	I0908 11:17:44.773311    9032 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:17:44.782522    9032 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:17:44.782637    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 11:17:44.783706    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 11:17:44.785532    9032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 11:17:44.785700    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 11:17:44.795092    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:17:44.815158    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 11:17:44.870331    9032 start.go:296] duration metric: took 4.7388093s for postStartSetup
	I0908 11:17:44.873116    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:46.942038    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:46.942374    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:46.942374    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:49.399186    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:49.399362    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:49.399537    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:17:49.401946    9032 start.go:128] duration metric: took 2m2.8485578s to createHost
	I0908 11:17:49.402026    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:51.446698    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:51.446698    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:51.446698    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:53.907170    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:53.907326    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:53.911908    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:53.912096    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:53.912684    9032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:17:54.034546    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757330274.053124979
	
	I0908 11:17:54.034546    9032 fix.go:216] guest clock: 1757330274.053124979
	I0908 11:17:54.034546    9032 fix.go:229] Guest: 2025-09-08 11:17:54.053124979 +0000 UTC Remote: 2025-09-08 11:17:49.4019464 +0000 UTC m=+319.133291601 (delta=4.651178579s)
	I0908 11:17:54.034546    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:17:56.082190    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:17:56.082333    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:56.082472    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:17:58.548657    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:17:58.548657    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:17:58.555819    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:17:58.556236    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.101 22 <nil> <nil>}
	I0908 11:17:58.556236    9032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757330274
	I0908 11:17:58.708912    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 11:17:54 UTC 2025
	
	I0908 11:17:58.709025    9032 fix.go:236] clock set: Mon Sep  8 11:17:54 UTC 2025
	 (err=<nil>)
	I0908 11:17:58.709025    9032 start.go:83] releasing machines lock for "ha-331000-m02", held for 2m12.1556651s
	I0908 11:17:58.709139    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:18:00.746479    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:00.746788    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:00.746846    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:03.272758    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:18:03.272758    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:03.277184    9032 out.go:179] * Found network options:
	I0908 11:18:03.279945    9032 out.go:179]   - NO_PROXY=172.20.59.73
	W0908 11:18:03.282709    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:18:03.286004    9032 out.go:179]   - NO_PROXY=172.20.59.73
	W0908 11:18:03.288615    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:18:03.289998    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:18:03.293113    9032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 11:18:03.293113    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:18:03.302628    9032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:18:03.302628    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m02 ).state
	I0908 11:18:05.458514    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:05.458543    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:05.458608    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:05.503512    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:05.503512    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:05.504338    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:08.166708    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:18:08.166708    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:08.167320    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:18:08.198773    9032 main.go:141] libmachine: [stdout =====>] : 172.20.54.101
	
	I0908 11:18:08.198773    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:08.199226    9032 sshutil.go:53] new ssh client: &{IP:172.20.54.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m02\id_rsa Username:docker}
	I0908 11:18:08.278196    9032 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.975506s)
	W0908 11:18:08.278196    9032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:18:08.289190    9032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9960152s)
	W0908 11:18:08.289190    9032 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 11:18:08.291383    9032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:18:08.329838    9032 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:18:08.329838    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:18:08.330124    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:18:08.382551    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0908 11:18:08.403072    9032 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 11:18:08.403072    9032 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 11:18:08.421197    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 11:18:08.443726    9032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 11:18:08.454897    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 11:18:08.489123    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:18:08.521304    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 11:18:08.553771    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:18:08.583398    9032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:18:08.617814    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 11:18:08.658425    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 11:18:08.691595    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 11:18:08.724398    9032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:18:08.744499    9032 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:18:08.756530    9032 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:18:08.789533    9032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:18:08.818604    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:09.049273    9032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 11:18:09.106516    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:18:09.117746    9032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 11:18:09.157730    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:18:09.197435    9032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:18:09.248145    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:18:09.285694    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:18:09.323813    9032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 11:18:09.393640    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:18:09.418705    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:18:09.472956    9032 ssh_runner.go:195] Run: which cri-dockerd
	I0908 11:18:09.493765    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 11:18:09.520621    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 11:18:09.577214    9032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 11:18:09.824772    9032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 11:18:10.049322    9032 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 11:18:10.049322    9032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 11:18:10.104825    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:18:10.145003    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:10.404218    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:18:11.177668    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:18:11.223271    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 11:18:11.261852    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:18:11.297586    9032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 11:18:11.543616    9032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 11:18:11.781366    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:12.014988    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 11:18:12.090183    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 11:18:12.126956    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:12.360568    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 11:18:12.522501    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:18:12.551210    9032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 11:18:12.562455    9032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 11:18:12.572659    9032 start.go:563] Will wait 60s for crictl version
	I0908 11:18:12.586018    9032 ssh_runner.go:195] Run: which crictl
	I0908 11:18:12.604563    9032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:18:12.667215    9032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 11:18:12.678840    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:18:12.729551    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:18:12.768935    9032 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 11:18:12.771496    9032 out.go:179]   - env NO_PROXY=172.20.59.73
	I0908 11:18:12.774284    9032 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 11:18:12.778311    9032 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 11:18:12.781377    9032 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 11:18:12.781377    9032 ip.go:218] interface addr: 172.20.48.1/20
	I0908 11:18:12.793068    9032 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 11:18:12.801373    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:18:12.831281    9032 mustload.go:65] Loading cluster: ha-331000
	I0908 11:18:12.832456    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:18:12.833321    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:18:14.898826    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:14.898826    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:14.899866    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:18:14.900617    9032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000 for IP: 172.20.54.101
	I0908 11:18:14.900617    9032 certs.go:194] generating shared ca certs ...
	I0908 11:18:14.900617    9032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:14.901417    9032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 11:18:14.901742    9032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 11:18:14.901945    9032 certs.go:256] generating profile certs ...
	I0908 11:18:14.902654    9032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key
	I0908 11:18:14.902760    9032 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3
	I0908 11:18:14.902887    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.73 172.20.54.101 172.20.63.254]
	I0908 11:18:15.091457    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3 ...
	I0908 11:18:15.091457    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3: {Name:mkc127c97031bee384e7b4182aa0bfd415af1e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:15.093209    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3 ...
	I0908 11:18:15.093209    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3: {Name:mke63046484a6d72a0a1d9017f58266a707b2dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:15.093728    9032 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.755394d3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt
	I0908 11:18:15.109935    9032 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.755394d3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key
	I0908 11:18:15.110698    9032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key
	I0908 11:18:15.110698    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 11:18:15.110698    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 11:18:15.111886    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 11:18:15.111954    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 11:18:15.112293    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 11:18:15.112522    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 11:18:15.122917    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 11:18:15.123216    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 11:18:15.123937    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 11:18:15.124494    9032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 11:18:15.124562    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 11:18:15.124936    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 11:18:15.125293    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 11:18:15.125499    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 11:18:15.126367    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 11:18:15.126643    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 11:18:15.126643    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:15.126643    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 11:18:15.127418    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:18:17.252918    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:17.252918    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:17.252918    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:19.743712    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:18:19.743712    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:19.744262    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:18:19.846833    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0908 11:18:19.855101    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0908 11:18:19.888229    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0908 11:18:19.897687    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0908 11:18:19.931135    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0908 11:18:19.938454    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0908 11:18:19.974285    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0908 11:18:19.982469    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0908 11:18:20.015436    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0908 11:18:20.023567    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0908 11:18:20.067132    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0908 11:18:20.076365    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0908 11:18:20.099309    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:18:20.153382    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:18:20.204580    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:18:20.253344    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:18:20.300247    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0908 11:18:20.352308    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:18:20.401981    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:18:20.452229    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:18:20.510802    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 11:18:20.564312    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:18:20.619212    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 11:18:20.669382    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0908 11:18:20.703164    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0908 11:18:20.736216    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0908 11:18:20.768065    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0908 11:18:20.804071    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0908 11:18:20.838958    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0908 11:18:20.873387    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0908 11:18:20.921666    9032 ssh_runner.go:195] Run: openssl version
	I0908 11:18:20.941689    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 11:18:20.974344    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 11:18:20.981631    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 11:18:20.991012    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 11:18:21.015028    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:18:21.050940    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:18:21.097872    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:21.109589    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:21.122221    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:21.144256    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:18:21.181670    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 11:18:21.216154    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 11:18:21.224143    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 11:18:21.235325    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 11:18:21.256323    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 11:18:21.291948    9032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:18:21.298559    9032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 11:18:21.298559    9032 kubeadm.go:926] updating node {m02 172.20.54.101 8443 v1.34.0 docker true true} ...
	I0908 11:18:21.298559    9032 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-331000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.54.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:18:21.299099    9032 kube-vip.go:115] generating kube-vip config ...
	I0908 11:18:21.309225    9032 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0908 11:18:21.338558    9032 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0908 11:18:21.338999    9032 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0908 11:18:21.350614    9032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:18:21.368141    9032 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0908 11:18:21.379890    9032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0908 11:18:21.403513    9032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm
	I0908 11:18:21.403705    9032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet
	I0908 11:18:21.403705    9032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl
	I0908 11:18:22.667427    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:18:22.681459    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:18:22.692424    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:18:22.698419    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:18:22.699446    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0908 11:18:22.699446    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0908 11:18:22.712568    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:18:22.827611    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0908 11:18:22.827611    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0908 11:18:23.198236    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:18:23.216567    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:18:23.241737    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0908 11:18:23.241737    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0908 11:18:24.078405    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0908 11:18:24.101187    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0908 11:18:24.138594    9032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:18:24.174666    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0908 11:18:24.224918    9032 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0908 11:18:24.232044    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:18:24.268682    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:24.501410    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:18:24.551792    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:18:24.579268    9032 start.go:317] joinCluster: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:18:24.579268    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0908 11:18:24.579976    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:18:26.702495    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:18:26.703564    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:26.703564    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:18:29.334198    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:18:29.334198    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:18:29.334890    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:18:29.588815    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0094841s)
	I0908 11:18:29.588815    9032 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:18:29.590117    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token esrdrt.s724uc2c04tfdq0u --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m02 --control-plane --apiserver-advertise-address=172.20.54.101 --apiserver-bind-port=8443"
	I0908 11:19:21.377409    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token esrdrt.s724uc2c04tfdq0u --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m02 --control-plane --apiserver-advertise-address=172.20.54.101 --apiserver-bind-port=8443": (51.7865747s)
	I0908 11:19:21.377589    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0908 11:19:22.386354    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-331000-m02 minikube.k8s.io/updated_at=2025_09_08T11_19_22_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=ha-331000 minikube.k8s.io/primary=false
	I0908 11:19:22.580464    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-331000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0908 11:19:22.764949    9032 start.go:319] duration metric: took 58.1849501s to joinCluster
	I0908 11:19:22.764949    9032 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:19:22.765945    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:19:22.773206    9032 out.go:179] * Verifying Kubernetes components...
	I0908 11:19:22.787187    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:19:23.323932    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:19:23.368094    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0908 11:19:23.368322    9032 kubeadm.go:483] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.59.73:8443
	I0908 11:19:23.369777    9032 node_ready.go:35] waiting up to 6m0s for node "ha-331000-m02" to be "Ready" ...
	W0908 11:19:25.376510    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:27.377149    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:29.377666    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:31.884734    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:34.378771    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:36.876326    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:38.877658    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	W0908 11:19:40.883387    9032 node_ready.go:57] node "ha-331000-m02" has "Ready":"False" status (will retry)
	I0908 11:19:42.876284    9032 node_ready.go:49] node "ha-331000-m02" is "Ready"
	I0908 11:19:42.876359    9032 node_ready.go:38] duration metric: took 19.5063375s for node "ha-331000-m02" to be "Ready" ...
	I0908 11:19:42.876359    9032 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:19:42.888654    9032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:19:42.927725    9032 api_server.go:72] duration metric: took 20.162524s to wait for apiserver process to appear ...
	I0908 11:19:42.927815    9032 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:19:42.927869    9032 api_server.go:253] Checking apiserver healthz at https://172.20.59.73:8443/healthz ...
	I0908 11:19:42.936812    9032 api_server.go:279] https://172.20.59.73:8443/healthz returned 200:
	ok
	I0908 11:19:42.938815    9032 api_server.go:141] control plane version: v1.34.0
	I0908 11:19:42.938815    9032 api_server.go:131] duration metric: took 10.9997ms to wait for apiserver health ...
	I0908 11:19:42.938815    9032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:19:42.947238    9032 system_pods.go:59] 17 kube-system pods found
	I0908 11:19:42.947238    9032 system_pods.go:61] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:19:42.947238    9032 system_pods.go:61] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:19:42.947238    9032 system_pods.go:74] duration metric: took 8.4226ms to wait for pod list to return data ...
	I0908 11:19:42.947238    9032 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:19:42.953347    9032 default_sa.go:45] found service account: "default"
	I0908 11:19:42.953396    9032 default_sa.go:55] duration metric: took 6.158ms for default service account to be created ...
	I0908 11:19:42.953396    9032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:19:42.960606    9032 system_pods.go:86] 17 kube-system pods found
	I0908 11:19:42.960606    9032 system_pods.go:89] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:19:42.960606    9032 system_pods.go:89] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:19:42.960606    9032 system_pods.go:89] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:19:42.960794    9032 system_pods.go:89] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:19:42.960919    9032 system_pods.go:89] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:19:42.960919    9032 system_pods.go:126] duration metric: took 7.5236ms to wait for k8s-apps to be running ...
	I0908 11:19:42.960919    9032 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:19:42.971535    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:19:43.006887    9032 system_svc.go:56] duration metric: took 45.9666ms WaitForService to wait for kubelet
	I0908 11:19:43.006887    9032 kubeadm.go:578] duration metric: took 20.2416846s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:19:43.006887    9032 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:19:43.015011    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:19:43.015011    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:19:43.015550    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:19:43.015550    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:19:43.015550    9032 node_conditions.go:105] duration metric: took 8.663ms to run NodePressure ...
	I0908 11:19:43.015550    9032 start.go:241] waiting for startup goroutines ...
	I0908 11:19:43.015710    9032 start.go:255] writing updated cluster config ...
	I0908 11:19:43.020324    9032 out.go:203] 
	I0908 11:19:43.037058    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:19:43.037058    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:19:43.044269    9032 out.go:179] * Starting "ha-331000-m03" control-plane node in "ha-331000" cluster
	I0908 11:19:43.046899    9032 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 11:19:43.046957    9032 cache.go:58] Caching tarball of preloaded images
	I0908 11:19:43.047022    9032 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 11:19:43.047555    9032 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 11:19:43.047755    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:19:43.063313    9032 start.go:360] acquireMachinesLock for ha-331000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:19:43.063569    9032 start.go:364] duration metric: took 256.1µs to acquireMachinesLock for "ha-331000-m03"
	I0908 11:19:43.063768    9032 start.go:93] Provisioning new machine with config: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:19:43.064021    9032 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0908 11:19:43.069612    9032 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 11:19:43.069876    9032 start.go:159] libmachine.API.Create for "ha-331000" (driver="hyperv")
	I0908 11:19:43.069876    9032 client.go:168] LocalClient.Create starting
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:19:43.070680    9032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 11:19:43.071468    9032 main.go:141] libmachine: Decoding PEM data...
	I0908 11:19:43.071468    9032 main.go:141] libmachine: Parsing certificate...
	I0908 11:19:43.071468    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 11:19:44.989307    9032 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 11:19:44.989384    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:44.989384    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 11:19:46.741169    9032 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 11:19:46.741276    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:46.741276    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:19:48.293658    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:19:48.293658    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:48.294794    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:19:52.151256    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:19:52.151349    9032 main.go:141] libmachine: [stderr =====>] : 
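The `Get-VMSwitch` pipeline above filters for an External switch or the well-known "Default Switch" GUID (visible in the JSON output), then the driver picks one. A sketch of that selection logic over the returned JSON (hypothetical `pick_switch` helper; SwitchType 2 is External and 1 is Internal in Hyper-V's enum):

```python
import json

# GUID of Hyper-V's built-in "Default Switch"; it is the same on every host,
# which is why the PowerShell filter can hard-code it.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(stdout: str) -> str:
    """Prefer an External switch; otherwise fall back to the Default Switch."""
    switches = json.loads(stdout)
    for sw in switches:
        if sw["SwitchType"] == 2:  # External
            return sw["Name"]
    for sw in switches:
        if sw["Id"].lower() == DEFAULT_SWITCH_ID:
            return sw["Name"]
    raise RuntimeError("no usable VM switch found")
```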
	I0908 11:19:52.152786    9032 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 11:19:52.752995    9032 main.go:141] libmachine: Creating SSH key...
	I0908 11:19:52.858511    9032 main.go:141] libmachine: Creating VM...
	I0908 11:19:52.858511    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 11:19:55.925145    9032 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 11:19:55.925145    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:55.925296    9032 main.go:141] libmachine: Using switch "Default Switch"
	I0908 11:19:55.925434    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 11:19:57.781755    9032 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 11:19:57.782114    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:19:57.782114    9032 main.go:141] libmachine: Creating VHD
	I0908 11:19:57.782114    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 11:20:01.556936    9032 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8ED16FBC-0547-451D-A5C7-C13BFEC5F949
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 11:20:01.557722    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:01.557722    9032 main.go:141] libmachine: Writing magic tar header
	I0908 11:20:01.557722    9032 main.go:141] libmachine: Writing SSH key tar header
	I0908 11:20:01.571975    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 11:20:04.713258    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:04.713843    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:04.713843    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\disk.vhd' -SizeBytes 20000MB
	I0908 11:20:07.197832    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:07.197832    9032 main.go:141] libmachine: [stderr =====>] : 
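The "Writing magic tar header" / "Writing SSH key tar header" steps above embed a raw tar stream carrying the machine's SSH key into the small fixed VHD before it is converted and resized; the guest extracts it on first boot. A simplified sketch of producing such a tar stream with the stdlib (hypothetical `ssh_key_tar` helper and member name; the real driver writes the tar header fields by hand):

```python
import io
import tarfile

def ssh_key_tar(pubkey: bytes) -> bytes:
    """Build an in-memory tar stream carrying an SSH key, of the kind the
    driver embeds at the start of the fixed VHD's data region."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # illustrative path
        info.size = len(pubkey)
        tar.addfile(info, io.BytesIO(pubkey))
    return buf.getvalue()
```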
	I0908 11:20:07.197962    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-331000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 11:20:10.788379    9032 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-331000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 11:20:10.788414    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:10.788414    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-331000-m03 -DynamicMemoryEnabled $false
	I0908 11:20:13.038761    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:13.038761    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:13.038978    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-331000-m03 -Count 2
	I0908 11:20:15.184004    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:15.184362    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:15.184436    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-331000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\boot2docker.iso'
	I0908 11:20:17.717638    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:17.717638    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:17.718635    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-331000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\disk.vhd'
	I0908 11:20:20.377272    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:20.377272    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:20.378152    9032 main.go:141] libmachine: Starting VM...
	I0908 11:20:20.378203    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-331000-m03
	I0908 11:20:23.508038    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:23.508038    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:23.508038    9032 main.go:141] libmachine: Waiting for host to start...
	I0908 11:20:23.508038    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:25.859623    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:25.859623    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:25.859623    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:28.437268    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:28.437268    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:29.438421    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:31.691135    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:31.691265    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:31.691446    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:34.354445    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:34.354445    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:35.355702    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:37.603186    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:37.603186    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:37.603186    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:40.159827    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:40.159827    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:41.160799    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:43.401461    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:43.401461    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:43.401581    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:45.979936    9032 main.go:141] libmachine: [stdout =====>] : 
	I0908 11:20:45.979936    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:46.980313    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:49.261692    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:49.262636    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:49.262705    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:52.121155    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:20:52.121155    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:52.122201    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:54.385031    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:54.385031    9032 main.go:141] libmachine: [stderr =====>] : 
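The "Waiting for host to start..." stretch above alternates `(Get-VM ...).state` with `(...).networkadapters[0].ipaddresses[0]`, sleeping about a second whenever the IP query comes back empty, until the VM reports an address (172.20.56.88 here). That retry loop can be sketched generically (hypothetical `wait_for_ip` helper; `probe` stands in for the PowerShell query):

```python
import time
from typing import Callable, Optional

def wait_for_ip(probe: Callable[[], str], attempts: int = 30,
                delay: float = 1.0) -> Optional[str]:
    """Poll `probe` until it yields a non-empty IP, or give up."""
    for _ in range(attempts):
        ip = probe().strip()
        if ip:
            return ip
        time.sleep(delay)
    return None
```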
	I0908 11:20:54.385031    9032 machine.go:93] provisionDockerMachine start ...
	I0908 11:20:54.385031    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:20:56.574458    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:20:56.574458    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:56.574748    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:20:59.120434    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:20:59.120757    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:20:59.127856    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:20:59.128882    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:20:59.128882    9032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:20:59.274979    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:20:59.275072    9032 buildroot.go:166] provisioning hostname "ha-331000-m03"
	I0908 11:20:59.275140    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:01.402631    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:01.402631    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:01.403018    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:03.994412    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:03.994722    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:04.000450    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:04.000995    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:04.001096    9032 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-331000-m03 && echo "ha-331000-m03" | sudo tee /etc/hostname
	I0908 11:21:04.171198    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-331000-m03
	
	I0908 11:21:04.171198    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:06.299116    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:06.299523    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:06.299523    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:08.838786    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:08.839879    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:08.846030    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:08.846618    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:08.846651    9032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-331000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-331000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-331000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:21:09.008823    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:21:09.008823    9032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 11:21:09.008940    9032 buildroot.go:174] setting up certificates
	I0908 11:21:09.008940    9032 provision.go:84] configureAuth start
	I0908 11:21:09.009035    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:11.095960    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:11.095960    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:11.096855    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:13.685526    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:13.686557    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:13.686767    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:15.772996    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:15.773469    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:15.773551    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:18.328117    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:18.328117    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:18.328117    9032 provision.go:143] copyHostCerts
	I0908 11:21:18.328117    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 11:21:18.328117    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 11:21:18.328117    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 11:21:18.329058    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 11:21:18.330007    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 11:21:18.330007    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 11:21:18.330007    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 11:21:18.330804    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 11:21:18.332086    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 11:21:18.332361    9032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 11:21:18.332361    9032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 11:21:18.332878    9032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 11:21:18.333534    9032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-331000-m03 san=[127.0.0.1 172.20.56.88 ha-331000-m03 localhost minikube]
	I0908 11:21:18.650549    9032 provision.go:177] copyRemoteCerts
	I0908 11:21:18.659544    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:21:18.659544    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:20.807232    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:20.807232    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:20.807979    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:23.319737    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:23.319737    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:23.320250    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:21:23.432422    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7728185s)
	I0908 11:21:23.432422    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 11:21:23.432422    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:21:23.486505    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 11:21:23.486505    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 11:21:23.539516    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 11:21:23.539516    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:21:23.596048    9032 provision.go:87] duration metric: took 14.5869253s to configureAuth
	I0908 11:21:23.596048    9032 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:21:23.596624    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:21:23.596989    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:25.720420    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:25.720420    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:25.721090    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:28.328722    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:28.328722    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:28.336931    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:28.337092    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:28.337092    9032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 11:21:28.487261    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 11:21:28.487261    9032 buildroot.go:70] root file system type: tmpfs
	I0908 11:21:28.487490    9032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 11:21:28.487599    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:30.593354    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:30.593354    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:30.593354    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:33.141874    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:33.141874    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:33.147778    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:33.147778    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:33.148355    9032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.20.59.73"
	Environment="NO_PROXY=172.20.59.73,172.20.54.101"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 11:21:33.326293    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.20.59.73
	Environment=NO_PROXY=172.20.59.73,172.20.54.101
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 11:21:33.326293    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:35.481077    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:35.481880    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:35.481880    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:38.057205    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:38.057926    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:38.063241    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:38.063918    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:38.063918    9032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 11:21:39.452136    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 11:21:39.452136    9032 machine.go:96] duration metric: took 45.0665419s to provisionDockerMachine
	I0908 11:21:39.452136    9032 client.go:171] duration metric: took 1m56.380817s to LocalClient.Create
	I0908 11:21:39.452136    9032 start.go:167] duration metric: took 1m56.380817s to libmachine.API.Create "ha-331000"
	I0908 11:21:39.452136    9032 start.go:293] postStartSetup for "ha-331000-m03" (driver="hyperv")
	I0908 11:21:39.452136    9032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:21:39.465222    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:21:39.465222    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:41.583555    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:41.583555    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:41.583639    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:44.143282    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:44.143488    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:44.143964    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:21:44.264444    9032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7991625s)
	I0908 11:21:44.275613    9032 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:21:44.283795    9032 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:21:44.283880    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 11:21:44.284128    9032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 11:21:44.285463    9032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 11:21:44.285463    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 11:21:44.295831    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:21:44.318408    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 11:21:44.376352    9032 start.go:296] duration metric: took 4.9241542s for postStartSetup
	I0908 11:21:44.379710    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:46.472651    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:46.473445    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:46.473544    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:48.999344    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:48.999836    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:49.000105    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\config.json ...
	I0908 11:21:49.002660    9032 start.go:128] duration metric: took 2m5.9369462s to createHost
	I0908 11:21:49.002777    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:51.080900    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:51.080900    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:51.080900    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:53.618126    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:53.618901    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:53.625116    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:53.625651    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:53.625744    9032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:21:53.769059    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757330513.767382949
	
	I0908 11:21:53.769137    9032 fix.go:216] guest clock: 1757330513.767382949
	I0908 11:21:53.769137    9032 fix.go:229] Guest: 2025-09-08 11:21:53.767382949 +0000 UTC Remote: 2025-09-08 11:21:49.0026609 +0000 UTC m=+558.731019101 (delta=4.764722049s)
	I0908 11:21:53.769137    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:21:55.901636    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:21:55.901636    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:55.902130    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:21:58.463792    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:21:58.463792    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:21:58.471355    9032 main.go:141] libmachine: Using SSH client type: native
	I0908 11:21:58.472040    9032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.56.88 22 <nil> <nil>}
	I0908 11:21:58.472124    9032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757330513
	I0908 11:21:58.634236    9032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 11:21:53 UTC 2025
	
	I0908 11:21:58.634236    9032 fix.go:236] clock set: Mon Sep  8 11:21:53 UTC 2025
	 (err=<nil>)
	I0908 11:21:58.634236    9032 start.go:83] releasing machines lock for "ha-331000-m03", held for 2m15.5689839s
	I0908 11:21:58.634236    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:22:00.790543    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:00.791315    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:00.791389    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:03.292724    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:22:03.293782    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:03.297035    9032 out.go:179] * Found network options:
	I0908 11:22:03.302590    9032 out.go:179]   - NO_PROXY=172.20.59.73,172.20.54.101
	W0908 11:22:03.307595    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.307595    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:22:03.310591    9032 out.go:179]   - NO_PROXY=172.20.59.73,172.20.54.101
	W0908 11:22:03.316602    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.316602    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.319010    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 11:22:03.319111    9032 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 11:22:03.321338    9032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 11:22:03.321338    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:22:03.334013    9032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:22:03.334013    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000-m03 ).state
	I0908 11:22:05.561625    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:05.562601    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:05.562485    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:05.562601    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:05.562686    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:05.562686    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000-m03 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:08.207621    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:22:08.207910    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:08.208123    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:22:08.249218    9032 main.go:141] libmachine: [stdout =====>] : 172.20.56.88
	
	I0908 11:22:08.249218    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:08.249663    9032 sshutil.go:53] new ssh client: &{IP:172.20.56.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000-m03\id_rsa Username:docker}
	I0908 11:22:08.314281    9032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9928805s)
	W0908 11:22:08.314364    9032 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 11:22:08.350255    9032 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0161799s)
	W0908 11:22:08.350255    9032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:22:08.363382    9032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:22:08.397632    9032 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:22:08.397632    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:22:08.398056    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:22:08.449391    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0908 11:22:08.459289    9032 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 11:22:08.459356    9032 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 11:22:08.487869    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 11:22:08.513248    9032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 11:22:08.523988    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 11:22:08.560890    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:22:08.596510    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 11:22:08.634157    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 11:22:08.667723    9032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:22:08.700890    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 11:22:08.735841    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 11:22:08.769037    9032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 11:22:08.801439    9032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:22:08.820155    9032 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:22:08.834386    9032 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:22:08.872615    9032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:22:08.903269    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:09.144835    9032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 11:22:09.204982    9032 start.go:495] detecting cgroup driver to use...
	I0908 11:22:09.215465    9032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 11:22:09.250905    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:22:09.288405    9032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:22:09.329084    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:22:09.366596    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:22:09.401357    9032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 11:22:09.467495    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 11:22:09.492227    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:22:09.546331    9032 ssh_runner.go:195] Run: which cri-dockerd
	I0908 11:22:09.565814    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 11:22:09.587404    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 11:22:09.635590    9032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 11:22:09.884840    9032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 11:22:10.109830    9032 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 11:22:10.109937    9032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 11:22:10.164114    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 11:22:10.200775    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:10.455783    9032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 11:22:11.192386    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:22:11.229790    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 11:22:11.269738    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:22:11.306911    9032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 11:22:11.548276    9032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 11:22:11.799006    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:12.031232    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 11:22:12.107021    9032 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 11:22:12.147642    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:12.401058    9032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 11:22:12.570410    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 11:22:12.602409    9032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 11:22:12.620876    9032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 11:22:12.632292    9032 start.go:563] Will wait 60s for crictl version
	I0908 11:22:12.643747    9032 ssh_runner.go:195] Run: which crictl
	I0908 11:22:12.662962    9032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:22:12.721009    9032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 11:22:12.731772    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:22:12.778545    9032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 11:22:12.815623    9032 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 11:22:12.820138    9032 out.go:179]   - env NO_PROXY=172.20.59.73
	I0908 11:22:12.823329    9032 out.go:179]   - env NO_PROXY=172.20.59.73,172.20.54.101
	I0908 11:22:12.825692    9032 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 11:22:12.830601    9032 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 11:22:12.833099    9032 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 11:22:12.833099    9032 ip.go:218] interface addr: 172.20.48.1/20
	I0908 11:22:12.844764    9032 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 11:22:12.852639    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:22:12.877193    9032 mustload.go:65] Loading cluster: ha-331000
	I0908 11:22:12.879633    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:22:12.880478    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:22:14.965550    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:14.966367    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:14.966367    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:22:14.967400    9032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000 for IP: 172.20.56.88
	I0908 11:22:14.967460    9032 certs.go:194] generating shared ca certs ...
	I0908 11:22:14.967460    9032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:22:14.968372    9032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 11:22:14.968849    9032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 11:22:14.969066    9032 certs.go:256] generating profile certs ...
	I0908 11:22:14.969752    9032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\client.key
	I0908 11:22:14.969967    9032 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe
	I0908 11:22:14.970085    9032 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.73 172.20.54.101 172.20.56.88 172.20.63.254]
	I0908 11:22:15.122275    9032 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe ...
	I0908 11:22:15.122275    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe: {Name:mk057b623324d456dec2f27ef6117b08481c86d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:22:15.124333    9032 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe ...
	I0908 11:22:15.124333    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe: {Name:mk1ebc4009e0b98e764cd6b67eb2845cce8f259f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:22:15.125270    9032 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt.e6406abe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt
	I0908 11:22:15.142205    9032 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key.e6406abe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key
	I0908 11:22:15.144098    9032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key
	I0908 11:22:15.144098    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 11:22:15.144098    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 11:22:15.144098    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 11:22:15.144947    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 11:22:15.145127    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 11:22:15.145554    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 11:22:15.145732    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 11:22:15.145978    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 11:22:15.146094    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 11:22:15.146628    9032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 11:22:15.146877    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 11:22:15.147196    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 11:22:15.147922    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 11:22:15.148336    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 11:22:15.149091    9032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 11:22:15.149122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 11:22:15.149122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:15.149656    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 11:22:15.149929    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:22:17.288154    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:17.289104    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:17.289286    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:19.842364    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:22:19.842364    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:19.843763    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:22:19.951453    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0908 11:22:19.959621    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0908 11:22:19.993915    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0908 11:22:20.000916    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0908 11:22:20.036021    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0908 11:22:20.044895    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0908 11:22:20.081356    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0908 11:22:20.089317    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0908 11:22:20.125644    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0908 11:22:20.132652    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0908 11:22:20.166854    9032 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0908 11:22:20.174348    9032 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0908 11:22:20.199308    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:22:20.255911    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:22:20.310231    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:22:20.364512    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:22:20.417575    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0908 11:22:20.472239    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:22:20.528239    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:22:20.587739    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-331000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:22:20.645424    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 11:22:20.706396    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:22:20.761883    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 11:22:20.815121    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0908 11:22:20.852947    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0908 11:22:20.887898    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0908 11:22:20.926289    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0908 11:22:20.968061    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0908 11:22:21.018821    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0908 11:22:21.062823    9032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0908 11:22:21.122240    9032 ssh_runner.go:195] Run: openssl version
	I0908 11:22:21.144790    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 11:22:21.182991    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 11:22:21.192279    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 11:22:21.203708    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 11:22:21.225998    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:22:21.263877    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:22:21.298309    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:21.305840    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:21.316656    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:22:21.340985    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:22:21.374188    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 11:22:21.410230    9032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 11:22:21.418283    9032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 11:22:21.429136    9032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 11:22:21.450522    9032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 11:22:21.483940    9032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:22:21.493960    9032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 11:22:21.494945    9032 kubeadm.go:926] updating node {m03 172.20.56.88 8443 v1.34.0 docker true true} ...
	I0908 11:22:21.494945    9032 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-331000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.56.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:22:21.494945    9032 kube-vip.go:115] generating kube-vip config ...
	I0908 11:22:21.505888    9032 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0908 11:22:21.540478    9032 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0908 11:22:21.540478    9032 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0908 11:22:21.551774    9032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:22:21.573508    9032 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0908 11:22:21.586850    9032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0908 11:22:21.610122    9032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256
	I0908 11:22:21.610122    9032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256
	I0908 11:22:21.610122    9032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
	I0908 11:22:21.610122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:22:21.610122    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:22:21.625329    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 11:22:21.625329    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:22:21.625842    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 11:22:21.639267    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0908 11:22:21.639439    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0908 11:22:21.682110    9032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:22:21.682110    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0908 11:22:21.682110    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0908 11:22:21.693165    9032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 11:22:21.725913    9032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0908 11:22:21.725913    9032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0908 11:22:22.929193    9032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0908 11:22:22.952184    9032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0908 11:22:22.991701    9032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:22:23.035613    9032 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0908 11:22:23.106287    9032 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0908 11:22:23.113936    9032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:22:23.154346    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:22:23.415159    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:22:23.464173    9032 host.go:66] Checking if "ha-331000" exists ...
	I0908 11:22:23.464173    9032 start.go:317] joinCluster: &{Name:ha-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clust
erName:ha-331000 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.73 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.101 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.20.56.88 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:22:23.464173    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0908 11:22:23.464173    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-331000 ).state
	I0908 11:22:25.638291    9032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 11:22:25.639234    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:25.639311    9032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-331000 ).networkadapters[0]).ipaddresses[0]
	I0908 11:22:28.260838    9032 main.go:141] libmachine: [stdout =====>] : 172.20.59.73
	
	I0908 11:22:28.260838    9032 main.go:141] libmachine: [stderr =====>] : 
	I0908 11:22:28.261558    9032 sshutil.go:53] new ssh client: &{IP:172.20.59.73 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-331000\id_rsa Username:docker}
	I0908 11:22:28.595706    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1313418s)
	I0908 11:22:28.595797    9032 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.20.56.88 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:22:28.595797    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token irms23.r745u42ppm7pmtog --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m03 --control-plane --apiserver-advertise-address=172.20.56.88 --apiserver-bind-port=8443"
	I0908 11:23:31.186137    9032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token irms23.r745u42ppm7pmtog --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-331000-m03 --control-plane --apiserver-advertise-address=172.20.56.88 --apiserver-bind-port=8443": (1m2.5895509s)
	I0908 11:23:31.186137    9032 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0908 11:23:31.978856    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-331000-m03 minikube.k8s.io/updated_at=2025_09_08T11_23_31_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=ha-331000 minikube.k8s.io/primary=false
	I0908 11:23:32.187302    9032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-331000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0908 11:23:32.377511    9032 start.go:319] duration metric: took 1m8.9124703s to joinCluster
	I0908 11:23:32.377511    9032 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.20.56.88 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 11:23:32.378520    9032 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:23:32.386523    9032 out.go:179] * Verifying Kubernetes components...
	I0908 11:23:32.400511    9032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:23:32.847437    9032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:23:32.903394    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0908 11:23:32.903595    9032 kubeadm.go:483] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.59.73:8443
	I0908 11:23:32.904581    9032 node_ready.go:35] waiting up to 6m0s for node "ha-331000-m03" to be "Ready" ...
	W0908 11:23:34.946251    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:37.410476    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:39.412034    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:41.415064    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:43.910231    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:45.911130    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:48.413774    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	W0908 11:23:50.911027    9032 node_ready.go:57] node "ha-331000-m03" has "Ready":"False" status (will retry)
	I0908 11:23:52.912039    9032 node_ready.go:49] node "ha-331000-m03" is "Ready"
	I0908 11:23:52.912039    9032 node_ready.go:38] duration metric: took 20.0071221s for node "ha-331000-m03" to be "Ready" ...
	I0908 11:23:52.912039    9032 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:23:52.924032    9032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:23:52.967087    9032 api_server.go:72] duration metric: took 20.589316s to wait for apiserver process to appear ...
	I0908 11:23:52.967186    9032 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:23:52.967186    9032 api_server.go:253] Checking apiserver healthz at https://172.20.59.73:8443/healthz ...
	I0908 11:23:52.975045    9032 api_server.go:279] https://172.20.59.73:8443/healthz returned 200:
	ok
	I0908 11:23:52.977049    9032 api_server.go:141] control plane version: v1.34.0
	I0908 11:23:52.977049    9032 api_server.go:131] duration metric: took 9.8628ms to wait for apiserver health ...
	I0908 11:23:52.977049    9032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:23:52.988634    9032 system_pods.go:59] 24 kube-system pods found
	I0908 11:23:52.988634    9032 system_pods.go:61] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "etcd-ha-331000-m03" [f3e07fd8-babb-48c8-b2ee-98ac1f0774a6] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kindnet-62t6b" [20cef753-27c5-4104-b55a-e06cd9dfdd13] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-apiserver-ha-331000-m03" [54e7e79c-00c9-4495-9ce6-7cff1c216b77] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-controller-manager-ha-331000-m03" [88083c0b-2e89-4b83-80f5-496186f1c17d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-proxy-kt6wd" [b04aa754-6d79-4baa-81e8-215962b8505d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-scheduler-ha-331000-m03" [790e3732-e3a1-4450-bf45-9cd8bc369180] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "kube-vip-ha-331000-m03" [0748fba6-8fdc-46c7-ac09-f0b39aff443d] Running
	I0908 11:23:52.988634    9032 system_pods.go:61] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:23:52.988634    9032 system_pods.go:74] duration metric: took 11.5848ms to wait for pod list to return data ...
	I0908 11:23:52.988634    9032 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:23:52.995358    9032 default_sa.go:45] found service account: "default"
	I0908 11:23:52.995618    9032 default_sa.go:55] duration metric: took 6.9841ms for default service account to be created ...
	I0908 11:23:52.995618    9032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:23:53.006498    9032 system_pods.go:86] 24 kube-system pods found
	I0908 11:23:53.006578    9032 system_pods.go:89] "coredns-66bc5c9577-66pcq" [7d55f59c-2274-4acf-88e6-9d8249a799ec] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "coredns-66bc5c9577-x595c" [bfc5c253-e38e-4a3f-94b9-fb077529ad73] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "etcd-ha-331000" [890d6f47-ec6d-4aa4-ab72-2225e83e8acb] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "etcd-ha-331000-m02" [644ee650-93af-4dfc-9e63-0c6010c65c34] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "etcd-ha-331000-m03" [f3e07fd8-babb-48c8-b2ee-98ac1f0774a6] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kindnet-62t6b" [20cef753-27c5-4104-b55a-e06cd9dfdd13] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kindnet-mrfp7" [622d9c7c-9041-43af-ad0d-1e0d99f1ae98] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kindnet-s8k98" [dc7044c5-20b9-4fbf-9c06-b2f23a5ed855] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-apiserver-ha-331000" [533211f4-476a-40cb-923d-d6946cb0bfd9] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-apiserver-ha-331000-m02" [e67a4f62-2619-4cf2-98cc-4e6d89b875dd] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-apiserver-ha-331000-m03" [54e7e79c-00c9-4495-9ce6-7cff1c216b77] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-controller-manager-ha-331000" [9e07cbfc-5c4f-4cba-b417-651a0d03f65c] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-controller-manager-ha-331000-m02" [80df16bb-edb3-4a03-98db-78c6cfbc2bc2] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-controller-manager-ha-331000-m03" [88083c0b-2e89-4b83-80f5-496186f1c17d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-proxy-kt6wd" [b04aa754-6d79-4baa-81e8-215962b8505d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-proxy-mwwp8" [55328dc6-be8b-4916-aeba-2e0548a7bcfd] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-proxy-smrc9" [f3ca315f-9042-4fe5-bcb8-4301b3d1ad36] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-scheduler-ha-331000" [3ea06d6e-8fa2-42fb-9c05-476e41b94f1b] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-scheduler-ha-331000-m02" [c31e63aa-3501-4822-ae81-7f406ac243ae] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-scheduler-ha-331000-m03" [790e3732-e3a1-4450-bf45-9cd8bc369180] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-vip-ha-331000" [566814e0-10c6-4b7b-a23c-56830dce657d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-vip-ha-331000-m02" [3c5112ed-c7e2-4c7a-a428-26fe4cc807d3] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "kube-vip-ha-331000-m03" [0748fba6-8fdc-46c7-ac09-f0b39aff443d] Running
	I0908 11:23:53.006578    9032 system_pods.go:89] "storage-provisioner" [91f36133-5872-4bf2-9606-697f746f797f] Running
	I0908 11:23:53.006578    9032 system_pods.go:126] duration metric: took 10.9601ms to wait for k8s-apps to be running ...
	I0908 11:23:53.006578    9032 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:23:53.017582    9032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:23:53.049859    9032 system_svc.go:56] duration metric: took 43.2805ms WaitForService to wait for kubelet
	I0908 11:23:53.049964    9032 kubeadm.go:578] duration metric: took 20.6720873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:23:53.049964    9032 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:23:53.056708    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:23:53.056708    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:23:53.056708    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:23:53.056708    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:23:53.056708    9032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:23:53.056708    9032 node_conditions.go:123] node cpu capacity is 2
	I0908 11:23:53.056708    9032 node_conditions.go:105] duration metric: took 6.6757ms to run NodePressure ...
	I0908 11:23:53.056708    9032 start.go:241] waiting for startup goroutines ...
	I0908 11:23:53.057239    9032 start.go:255] writing updated cluster config ...
	I0908 11:23:53.068806    9032 ssh_runner.go:195] Run: rm -f paused
	I0908 11:23:53.076828    9032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:23:53.078364    9032 kapi.go:59] client config for ha-331000: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-331000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 11:23:53.095503    9032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-66pcq" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.106839    9032 pod_ready.go:94] pod "coredns-66bc5c9577-66pcq" is "Ready"
	I0908 11:23:53.106922    9032 pod_ready.go:86] duration metric: took 11.419ms for pod "coredns-66bc5c9577-66pcq" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.106922    9032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x595c" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.117946    9032 pod_ready.go:94] pod "coredns-66bc5c9577-x595c" is "Ready"
	I0908 11:23:53.118033    9032 pod_ready.go:86] duration metric: took 11.1107ms for pod "coredns-66bc5c9577-x595c" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.124108    9032 pod_ready.go:83] waiting for pod "etcd-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.133755    9032 pod_ready.go:94] pod "etcd-ha-331000" is "Ready"
	I0908 11:23:53.133930    9032 pod_ready.go:86] duration metric: took 9.7356ms for pod "etcd-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.133930    9032 pod_ready.go:83] waiting for pod "etcd-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.144082    9032 pod_ready.go:94] pod "etcd-ha-331000-m02" is "Ready"
	I0908 11:23:53.144082    9032 pod_ready.go:86] duration metric: took 10.1523ms for pod "etcd-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.144249    9032 pod_ready.go:83] waiting for pod "etcd-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.280143    9032 request.go:683] "Waited before sending request" delay="135.892ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-331000-m03"
	I0908 11:23:53.479621    9032 request.go:683] "Waited before sending request" delay="191.6738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:53.485187    9032 pod_ready.go:94] pod "etcd-ha-331000-m03" is "Ready"
	I0908 11:23:53.485187    9032 pod_ready.go:86] duration metric: took 340.9334ms for pod "etcd-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.680162    9032 request.go:683] "Waited before sending request" delay="194.9726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0908 11:23:53.687445    9032 pod_ready.go:83] waiting for pod "kube-apiserver-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:53.880214    9032 request.go:683] "Waited before sending request" delay="192.4585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-331000"
	I0908 11:23:54.080391    9032 request.go:683] "Waited before sending request" delay="194.0726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:54.086720    9032 pod_ready.go:94] pod "kube-apiserver-ha-331000" is "Ready"
	I0908 11:23:54.086792    9032 pod_ready.go:86] duration metric: took 399.1595ms for pod "kube-apiserver-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.086792    9032 pod_ready.go:83] waiting for pod "kube-apiserver-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.280260    9032 request.go:683] "Waited before sending request" delay="193.3033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-331000-m02"
	I0908 11:23:54.480309    9032 request.go:683] "Waited before sending request" delay="192.7842ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:54.486031    9032 pod_ready.go:94] pod "kube-apiserver-ha-331000-m02" is "Ready"
	I0908 11:23:54.486151    9032 pod_ready.go:86] duration metric: took 399.3543ms for pod "kube-apiserver-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.486151    9032 pod_ready.go:83] waiting for pod "kube-apiserver-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:54.680168    9032 request.go:683] "Waited before sending request" delay="194.0148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-331000-m03"
	I0908 11:23:54.880383    9032 request.go:683] "Waited before sending request" delay="194.3689ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:54.886671    9032 pod_ready.go:94] pod "kube-apiserver-ha-331000-m03" is "Ready"
	I0908 11:23:54.886706    9032 pod_ready.go:86] duration metric: took 400.5495ms for pod "kube-apiserver-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.080306    9032 request.go:683] "Waited before sending request" delay="193.4188ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0908 11:23:55.089618    9032 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.280343    9032 request.go:683] "Waited before sending request" delay="190.5616ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-331000"
	I0908 11:23:55.479866    9032 request.go:683] "Waited before sending request" delay="193.5005ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:55.488384    9032 pod_ready.go:94] pod "kube-controller-manager-ha-331000" is "Ready"
	I0908 11:23:55.488587    9032 pod_ready.go:86] duration metric: took 398.8704ms for pod "kube-controller-manager-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.488587    9032 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.679634    9032 request.go:683] "Waited before sending request" delay="190.8003ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-331000-m02"
	I0908 11:23:55.880557    9032 request.go:683] "Waited before sending request" delay="190.404ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:55.894392    9032 pod_ready.go:94] pod "kube-controller-manager-ha-331000-m02" is "Ready"
	I0908 11:23:55.894468    9032 pod_ready.go:86] duration metric: took 405.8753ms for pod "kube-controller-manager-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:55.894468    9032 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.079948    9032 request.go:683] "Waited before sending request" delay="185.3781ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-331000-m03"
	I0908 11:23:56.280181    9032 request.go:683] "Waited before sending request" delay="192.5257ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:56.285858    9032 pod_ready.go:94] pod "kube-controller-manager-ha-331000-m03" is "Ready"
	I0908 11:23:56.285858    9032 pod_ready.go:86] duration metric: took 391.3853ms for pod "kube-controller-manager-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.481138    9032 request.go:683] "Waited before sending request" delay="195.2773ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0908 11:23:56.487072    9032 pod_ready.go:83] waiting for pod "kube-proxy-kt6wd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.679529    9032 request.go:683] "Waited before sending request" delay="191.9104ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt6wd"
	I0908 11:23:56.881130    9032 request.go:683] "Waited before sending request" delay="194.9441ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:56.887188    9032 pod_ready.go:94] pod "kube-proxy-kt6wd" is "Ready"
	I0908 11:23:56.887188    9032 pod_ready.go:86] duration metric: took 399.5675ms for pod "kube-proxy-kt6wd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:56.887286    9032 pod_ready.go:83] waiting for pod "kube-proxy-mwwp8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.080083    9032 request.go:683] "Waited before sending request" delay="192.7303ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mwwp8"
	I0908 11:23:57.279841    9032 request.go:683] "Waited before sending request" delay="193.1221ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:57.286182    9032 pod_ready.go:94] pod "kube-proxy-mwwp8" is "Ready"
	I0908 11:23:57.286182    9032 pod_ready.go:86] duration metric: took 398.8901ms for pod "kube-proxy-mwwp8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.286270    9032 pod_ready.go:83] waiting for pod "kube-proxy-smrc9" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.480183    9032 request.go:683] "Waited before sending request" delay="193.8254ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-smrc9"
	I0908 11:23:57.680428    9032 request.go:683] "Waited before sending request" delay="192.727ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:57.696508    9032 pod_ready.go:94] pod "kube-proxy-smrc9" is "Ready"
	I0908 11:23:57.696508    9032 pod_ready.go:86] duration metric: took 410.2334ms for pod "kube-proxy-smrc9" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:57.879753    9032 request.go:683] "Waited before sending request" delay="183.1601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0908 11:23:57.889927    9032 pod_ready.go:83] waiting for pod "kube-scheduler-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.081820    9032 request.go:683] "Waited before sending request" delay="191.891ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-331000"
	I0908 11:23:58.279927    9032 request.go:683] "Waited before sending request" delay="191.6019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000"
	I0908 11:23:58.289047    9032 pod_ready.go:94] pod "kube-scheduler-ha-331000" is "Ready"
	I0908 11:23:58.289047    9032 pod_ready.go:86] duration metric: took 399.1154ms for pod "kube-scheduler-ha-331000" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.289047    9032 pod_ready.go:83] waiting for pod "kube-scheduler-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.480439    9032 request.go:683] "Waited before sending request" delay="191.3891ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-331000-m02"
	I0908 11:23:58.679951    9032 request.go:683] "Waited before sending request" delay="192.7459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m02"
	I0908 11:23:58.686736    9032 pod_ready.go:94] pod "kube-scheduler-ha-331000-m02" is "Ready"
	I0908 11:23:58.686736    9032 pod_ready.go:86] duration metric: took 397.684ms for pod "kube-scheduler-ha-331000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.686871    9032 pod_ready.go:83] waiting for pod "kube-scheduler-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:58.880280    9032 request.go:683] "Waited before sending request" delay="193.2557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-331000-m03"
	I0908 11:23:59.080081    9032 request.go:683] "Waited before sending request" delay="193.6928ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.63.254:8443/api/v1/nodes/ha-331000-m03"
	I0908 11:23:59.085660    9032 pod_ready.go:94] pod "kube-scheduler-ha-331000-m03" is "Ready"
	I0908 11:23:59.085660    9032 pod_ready.go:86] duration metric: took 398.7843ms for pod "kube-scheduler-ha-331000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:23:59.085746    9032 pod_ready.go:40] duration metric: took 6.0087413s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:23:59.219961    9032 start.go:617] kubectl: 1.34.0, cluster: 1.34.0 (minor skew: 0)
	I0908 11:23:59.225671    9032 out.go:179] * Done! kubectl is now configured to use "ha-331000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.624782749Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count ecebe134df39d4547c205555809223e28f161f54e370b5bd9afeecbf5e78deb3], retrying...."
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.724453649Z" level=info msg="Loading containers: done."
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.758373149Z" level=info msg="Docker daemon" commit=249d679 containerd-snapshotter=false storage-driver=overlay2 version=28.4.0
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.758508849Z" level=info msg="Initializing buildkit"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.789908049Z" level=info msg="Completed buildkit initialization"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802419249Z" level=info msg="Daemon has completed initialization"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802470149Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802517749Z" level=info msg="API listen on /run/docker.sock"
	Sep 08 11:15:11 ha-331000 dockerd[1778]: time="2025-09-08T11:15:11.802549449Z" level=info msg="API listen on [::]:2376"
	Sep 08 11:15:11 ha-331000 systemd[1]: Started Docker Application Container Engine.
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ca92f744e82a5520833697e05120addbe2bf45d79b817ec6b2194c8d65c4925/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c666b143621f59134c6e2500d43f1c0d6c810fb14829f60d0ef3233d1fc3cb11/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac8b5cd3e243ad1b413235794365cee8a292862b7d569b0358c408249ed0e1d9/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e7f286153cad9f9408b8ac2a864859e5fdfe368535ac1ea8d9ea387e5d86e10c/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:22 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/39738258a04922bf02296b90594789cacf0e92ea0d9f8e6bca73e8bee7b02a6c/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:32 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:32Z" level=info msg="Stop pulling image ghcr.io/kube-vip/kube-vip:v1.0.0: Status: Downloaded newer image for ghcr.io/kube-vip/kube-vip:v1.0.0"
	Sep 08 11:15:35 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:35Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 08 11:15:36 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d9aaf6382844361d220a584f72aff747f7b31d3c0ea7448320b07331419c869/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:37 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7d644b2de2060828a617429cff42a24609158d29262086069e3c9a74893405e0/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:44 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:44Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 08 11:15:59 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d9f06ca26bb0d46350387ead567b86c32d03c9cdcfc193aa2b23eeed4c17a82d/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:59 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c821f225b0bb599592a36aac7bec4ea340c7f9d2b6b9f1795ec0bebb0f557f45/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:15:59 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:15:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e017b041362ad82b2f50619699fbc7817aa174dcfd11fdd7a477c41ac0cee38/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 11:24:38 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:24:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f5353fd2e31b2d7d5559e16026a8ea6c4407aca4807d3e4c9ee40d27783ac82e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 08 11:24:40 ha-331000 cri-dockerd[1645]: time="2025-09-08T11:24:40Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	119e4da7957c7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   f5353fd2e31b2       busybox-7b57f96db7-9vn9f
	c347d407ba4cb       52546a367cc9e                                                                                         26 minutes ago      Running             coredns                   0                   c821f225b0bb5       coredns-66bc5c9577-x595c
	1af67a1836ec4       52546a367cc9e                                                                                         26 minutes ago      Running             coredns                   0                   d9f06ca26bb0d       coredns-66bc5c9577-66pcq
	28c6f040dbf0e       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   6e017b041362a       storage-provisioner
	d20041f7a2f04       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              27 minutes ago      Running             kindnet-cni               0                   7d644b2de2060       kindnet-s8k98
	97663746caa0b       df0860106674d                                                                                         27 minutes ago      Running             kube-proxy                0                   9d9aaf6382844       kube-proxy-smrc9
	7ce862c8c2cd1       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     27 minutes ago      Running             kube-vip                  0                   39738258a0492       kube-vip-ha-331000
	49f5a74368fb6       5f1f5298c888d                                                                                         27 minutes ago      Running             etcd                      0                   e7f286153cad9       etcd-ha-331000
	ea216735dd19d       46169d968e920                                                                                         27 minutes ago      Running             kube-scheduler            0                   ac8b5cd3e243a       kube-scheduler-ha-331000
	ba99e0fd1b296       a0af72f2ec6d6                                                                                         27 minutes ago      Running             kube-controller-manager   0                   c666b143621f5       kube-controller-manager-ha-331000
	7ac2656037f51       90550c43ad2bc                                                                                         27 minutes ago      Running             kube-apiserver            0                   5ca92f744e82a       kube-apiserver-ha-331000
	
	
	==> coredns [1af67a1836ec] <==
	[INFO] 10.244.2.2:33118 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000138102s
	[INFO] 10.244.2.2:36973 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 89 0.002022518s
	[INFO] 10.244.1.2:58575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247902s
	[INFO] 10.244.1.2:41526 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143001s
	[INFO] 10.244.0.4:53000 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015138836s
	[INFO] 10.244.0.4:32813 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159301s
	[INFO] 10.244.0.4:56346 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109001s
	[INFO] 10.244.2.2:45140 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013930125s
	[INFO] 10.244.2.2:60260 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106701s
	[INFO] 10.244.2.2:52878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105101s
	[INFO] 10.244.1.2:35720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149801s
	[INFO] 10.244.1.2:34477 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000296403s
	[INFO] 10.244.0.4:37842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000428103s
	[INFO] 10.244.0.4:33068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117401s
	[INFO] 10.244.2.2:50512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124801s
	[INFO] 10.244.2.2:54937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163701s
	[INFO] 10.244.2.2:47278 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164701s
	[INFO] 10.244.2.2:40642 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223302s
	[INFO] 10.244.1.2:35632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117701s
	[INFO] 10.244.1.2:49567 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000385703s
	[INFO] 10.244.0.4:36803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000160301s
	[INFO] 10.244.0.4:57511 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000181002s
	[INFO] 10.244.2.2:47712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001138s
	[INFO] 10.244.2.2:45821 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000195901s
	[INFO] 10.244.2.2:44190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000384703s
	
	
	==> coredns [c347d407ba4c] <==
	[INFO] 10.244.1.2:39878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150901s
	[INFO] 10.244.1.2:54330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.041521874s
	[INFO] 10.244.1.2:42556 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126801s
	[INFO] 10.244.1.2:53807 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025529329s
	[INFO] 10.244.1.2:32944 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122901s
	[INFO] 10.244.1.2:52641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000361903s
	[INFO] 10.244.0.4:43577 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270402s
	[INFO] 10.244.0.4:59291 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000191901s
	[INFO] 10.244.0.4:47363 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155401s
	[INFO] 10.244.0.4:50361 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000333803s
	[INFO] 10.244.0.4:59534 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131001s
	[INFO] 10.244.2.2:58252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230902s
	[INFO] 10.244.2.2:40932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226602s
	[INFO] 10.244.2.2:38854 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000168101s
	[INFO] 10.244.2.2:33655 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120201s
	[INFO] 10.244.2.2:48291 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059001s
	[INFO] 10.244.1.2:57084 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158102s
	[INFO] 10.244.1.2:46607 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202902s
	[INFO] 10.244.0.4:43722 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157001s
	[INFO] 10.244.0.4:53189 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000275002s
	[INFO] 10.244.1.2:42829 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000220902s
	[INFO] 10.244.1.2:57669 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0000934s
	[INFO] 10.244.0.4:37278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136301s
	[INFO] 10.244.0.4:54658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078901s
	[INFO] 10.244.2.2:56538 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000263502s
	
	
	==> describe nodes <==
	Name:               ha-331000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-331000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=ha-331000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_15_35_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:15:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-331000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:42:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:37:52 +0000   Mon, 08 Sep 2025 11:15:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:37:52 +0000   Mon, 08 Sep 2025 11:15:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:37:52 +0000   Mon, 08 Sep 2025 11:15:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:37:52 +0000   Mon, 08 Sep 2025 11:15:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.59.73
	  Hostname:    ha-331000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	System Info:
	  Machine ID:                 249b21018bea44f699851389d47a9e54
	  System UUID:                9b619134-0a9b-2d4b-8f6c-7910abeef38c
	  Boot ID:                    4d57f7f4-ac7c-4865-ab08-17acbb07b094
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9vn9f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-66pcq             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 coredns-66bc5c9577-x595c             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 etcd-ha-331000                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         27m
	  kube-system                 kindnet-s8k98                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      27m
	  kube-system                 kube-apiserver-ha-331000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-331000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-smrc9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-331000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-331000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-331000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-331000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-331000 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                node-controller  Node ha-331000 event: Registered Node ha-331000 in Controller
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-331000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-331000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-331000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                26m                kubelet          Node ha-331000 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node ha-331000 event: Registered Node ha-331000 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-331000 event: Registered Node ha-331000 in Controller
	
	
	Name:               ha-331000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-331000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=ha-331000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T11_19_22_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:19:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-331000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:41:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Sep 2025 11:37:02 +0000   Mon, 08 Sep 2025 11:42:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Sep 2025 11:37:02 +0000   Mon, 08 Sep 2025 11:42:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Sep 2025 11:37:02 +0000   Mon, 08 Sep 2025 11:42:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Sep 2025 11:37:02 +0000   Mon, 08 Sep 2025 11:42:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.20.54.101
	  Hostname:    ha-331000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6dd1047800e48f4b378489e353289dc
	  System UUID:                1218f680-7a65-5643-b365-add7e1fde0c1
	  Boot ID:                    00a93a50-d154-4210-8842-9359f3a59f53
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2wjzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-331000-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         23m
	  kube-system                 kindnet-mrfp7                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      23m
	  kube-system                 kube-apiserver-ha-331000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-331000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-mwwp8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-331000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-331000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        23m   kube-proxy       
	  Normal  RegisteredNode  23m   node-controller  Node ha-331000-m02 event: Registered Node ha-331000-m02 in Controller
	  Normal  RegisteredNode  23m   node-controller  Node ha-331000-m02 event: Registered Node ha-331000-m02 in Controller
	  Normal  RegisteredNode  19m   node-controller  Node ha-331000-m02 event: Registered Node ha-331000-m02 in Controller
	  Normal  NodeNotReady    20s   node-controller  Node ha-331000-m02 status is now: NodeNotReady
	
	
	Name:               ha-331000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-331000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=ha-331000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T11_23_31_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:23:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-331000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:42:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:37:46 +0000   Mon, 08 Sep 2025 11:23:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:37:46 +0000   Mon, 08 Sep 2025 11:23:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:37:46 +0000   Mon, 08 Sep 2025 11:23:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:37:46 +0000   Mon, 08 Sep 2025 11:23:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.56.88
	  Hostname:    ha-331000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 e96735ab0b1c46f99d90f845bc8e1497
	  System UUID:                0b3baf86-40f7-384e-88c6-82fd84416909
	  Boot ID:                    45488445-c5ea-4946-8d4e-4b98b74eca69
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-qhn4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-331000-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         19m
	  kube-system                 kindnet-62t6b                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19m
	  kube-system                 kube-apiserver-ha-331000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-331000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-kt6wd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-331000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-331000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        19m   kube-proxy       
	  Normal  RegisteredNode  19m   node-controller  Node ha-331000-m03 event: Registered Node ha-331000-m03 in Controller
	  Normal  RegisteredNode  19m   node-controller  Node ha-331000-m03 event: Registered Node ha-331000-m03 in Controller
	  Normal  RegisteredNode  19m   node-controller  Node ha-331000-m03 event: Registered Node ha-331000-m03 in Controller
	
	
	Name:               ha-331000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-331000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=ha-331000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T11_28_52_0700
	                    minikube.k8s.io/version=v1.36.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:28:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-331000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:42:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:39:13 +0000   Mon, 08 Sep 2025 11:28:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:39:13 +0000   Mon, 08 Sep 2025 11:28:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:39:13 +0000   Mon, 08 Sep 2025 11:28:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:39:13 +0000   Mon, 08 Sep 2025 11:29:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.63.158
	  Hostname:    ha-331000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 8caef9a42e2d499f9c58c88593476921
	  System UUID:                ed169964-c401-9047-8673-73cded947ce3
	  Boot ID:                    2323af1c-29e9-4d1a-8aee-60e73d2d2d3d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d4qrk       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      13m
	  kube-system                 kube-proxy-dlhkm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-331000-m04 event: Registered Node ha-331000-m04 in Controller
	  Normal  NodeHasSufficientMemory  13m (x4 over 13m)  kubelet          Node ha-331000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet          Node ha-331000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x4 over 13m)  kubelet          Node ha-331000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-331000-m04 event: Registered Node ha-331000-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-331000-m04 event: Registered Node ha-331000-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-331000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 8 11:13] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.003125] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001493] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.148731] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.003157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.016807] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.560386] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 8 11:14] kauditd_printk_skb: 96 callbacks suppressed
	[Sep 8 11:15] kauditd_printk_skb: 237 callbacks suppressed
	[  +0.162155] kauditd_printk_skb: 193 callbacks suppressed
	[ +13.121422] kauditd_printk_skb: 174 callbacks suppressed
	[ +11.168531] kauditd_printk_skb: 144 callbacks suppressed
	[  +0.698639] kauditd_printk_skb: 17 callbacks suppressed
	[Sep 8 11:19] kauditd_printk_skb: 92 callbacks suppressed
	[Sep 8 11:23] hrtimer: interrupt took 3015334 ns
	[Sep 8 11:41] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [49f5a74368fb] <==
	{"level":"warn","ts":"2025-09-08T11:42:50.772774Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.778151Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.783080Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.788140Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.811262Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.835093Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.839224Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.884452Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.890273Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.895365Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.896750Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.899484Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.902914Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.908810Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.911536Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.921956Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.932271Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.937690Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.943214Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.947910Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.958972Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.968485Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:50.969806Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:51.012193Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-08T11:42:51.027176Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"d9d2266b019978c3","from":"d9d2266b019978c3","remote-peer-id":"4f39c5f386e4c391","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:42:51 up 29 min,  0 users,  load average: 1.42, 0.83, 0.56
	Linux ha-331000 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [d20041f7a2f0] <==
	I0908 11:42:16.296833       1 main.go:324] Node ha-331000-m04 has CIDR [10.244.3.0/24] 
	I0908 11:42:26.296695       1 main.go:297] Handling node with IPs: map[172.20.59.73:{}]
	I0908 11:42:26.296763       1 main.go:301] handling current node
	I0908 11:42:26.296999       1 main.go:297] Handling node with IPs: map[172.20.54.101:{}]
	I0908 11:42:26.297016       1 main.go:324] Node ha-331000-m02 has CIDR [10.244.1.0/24] 
	I0908 11:42:26.297218       1 main.go:297] Handling node with IPs: map[172.20.56.88:{}]
	I0908 11:42:26.297254       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	I0908 11:42:26.297418       1 main.go:297] Handling node with IPs: map[172.20.63.158:{}]
	I0908 11:42:26.297428       1 main.go:324] Node ha-331000-m04 has CIDR [10.244.3.0/24] 
	I0908 11:42:36.295727       1 main.go:297] Handling node with IPs: map[172.20.63.158:{}]
	I0908 11:42:36.295762       1 main.go:324] Node ha-331000-m04 has CIDR [10.244.3.0/24] 
	I0908 11:42:36.295943       1 main.go:297] Handling node with IPs: map[172.20.59.73:{}]
	I0908 11:42:36.296027       1 main.go:301] handling current node
	I0908 11:42:36.296237       1 main.go:297] Handling node with IPs: map[172.20.54.101:{}]
	I0908 11:42:36.296377       1 main.go:324] Node ha-331000-m02 has CIDR [10.244.1.0/24] 
	I0908 11:42:36.296970       1 main.go:297] Handling node with IPs: map[172.20.56.88:{}]
	I0908 11:42:36.297167       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	I0908 11:42:46.295506       1 main.go:297] Handling node with IPs: map[172.20.59.73:{}]
	I0908 11:42:46.295535       1 main.go:301] handling current node
	I0908 11:42:46.295550       1 main.go:297] Handling node with IPs: map[172.20.54.101:{}]
	I0908 11:42:46.295556       1 main.go:324] Node ha-331000-m02 has CIDR [10.244.1.0/24] 
	I0908 11:42:46.295939       1 main.go:297] Handling node with IPs: map[172.20.56.88:{}]
	I0908 11:42:46.295949       1 main.go:324] Node ha-331000-m03 has CIDR [10.244.2.0/24] 
	I0908 11:42:46.296674       1 main.go:297] Handling node with IPs: map[172.20.63.158:{}]
	I0908 11:42:46.296689       1 main.go:324] Node ha-331000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7ac2656037f5] <==
	I0908 11:29:02.983936       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:29:32.505893       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:30:09.811765       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:30:57.922170       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:31:19.569741       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:32:14.519116       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:32:48.689226       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:33:38.164378       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:34:11.035754       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:34:57.966017       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:35:11.110798       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:35:26.875854       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 11:36:10.696744       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:36:11.939208       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:37:24.480018       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:37:27.266023       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:37.396517       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:41.780124       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:39:42.832996       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:39:52.190248       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:40:49.093292       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:41:08.881239       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 11:41:49.698670       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.56.88 172.20.59.73]
	I0908 11:41:56.989971       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:42:25.797969       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [ba99e0fd1b29] <==
	I0908 11:15:35.147169       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:15:35.153373       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:15:35.161064       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 11:15:35.163458       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 11:15:35.164032       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-331000" podCIDRs=["10.244.0.0/24"]
	I0908 11:15:35.189439       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:15:35.192020       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 11:15:35.196876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:15:35.196893       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:15:35.196909       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:15:35.207865       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:15:35.214436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:16:00.129999       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0908 11:19:21.180618       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-331000-m02\" does not exist"
	I0908 11:19:21.276021       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-331000-m02" podCIDRs=["10.244.1.0/24"]
	I0908 11:19:25.171635       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-331000-m02"
	I0908 11:23:30.607666       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-331000-m03\" does not exist"
	I0908 11:23:30.672032       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-331000-m03" podCIDRs=["10.244.2.0/24"]
	I0908 11:23:35.464567       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-331000-m03"
	E0908 11:28:51.281785       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-dl524 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-dl524\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0908 11:28:51.467735       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-331000-m04\" does not exist"
	I0908 11:28:51.498587       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-331000-m04" podCIDRs=["10.244.3.0/24"]
	I0908 11:28:55.563818       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-331000-m04"
	I0908 11:29:19.932940       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-331000-m04"
	I0908 11:42:30.774048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-331000-m04"
	
	
	==> kube-proxy [97663746caa0] <==
	I0908 11:15:37.550833       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:15:37.651533       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:15:37.651635       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.20.59.73"]
	E0908 11:15:37.651830       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:15:37.707963       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 11:15:37.708058       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:15:37.708087       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:15:37.721544       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:15:37.722140       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:15:37.722160       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:15:37.728460       1 config.go:200] "Starting service config controller"
	I0908 11:15:37.731827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:15:37.732282       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:15:37.732659       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:15:37.732832       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:15:37.733087       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:15:37.731055       1 config.go:309] "Starting node config controller"
	I0908 11:15:37.734538       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:15:37.734611       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:15:37.833458       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:15:37.833458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:15:37.833490       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ea216735dd19] <==
	E0908 11:15:28.329165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 11:15:28.365273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 11:15:28.440804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:15:28.511098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 11:15:28.519525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 11:15:28.573194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 11:15:28.603573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0908 11:15:29.806937       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0908 11:19:21.318623       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mrfp7\": pod kindnet-mrfp7 is already assigned to node \"ha-331000-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mrfp7" node="ha-331000-m02"
	E0908 11:19:21.319527       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mrfp7\": pod kindnet-mrfp7 is already assigned to node \"ha-331000-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mrfp7"
	E0908 11:19:21.320883       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mwwp8\": pod kube-proxy-mwwp8 is already assigned to node \"ha-331000-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mwwp8" node="ha-331000-m02"
	E0908 11:19:21.320966       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mwwp8\": pod kube-proxy-mwwp8 is already assigned to node \"ha-331000-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-mwwp8"
	I0908 11:19:21.323382       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mwwp8" node="ha-331000-m02"
	E0908 11:23:30.788937       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kt6wd\": pod kube-proxy-kt6wd is already assigned to node \"ha-331000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kt6wd" node="ha-331000-m03"
	E0908 11:23:30.789039       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b04aa754-6d79-4baa-81e8-215962b8505d(kube-system/kube-proxy-kt6wd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-kt6wd"
	E0908 11:23:30.789068       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kt6wd\": pod kube-proxy-kt6wd is already assigned to node \"ha-331000-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-kt6wd"
	I0908 11:23:30.790368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kt6wd" node="ha-331000-m03"
	E0908 11:23:30.809903       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lp8fx\": pod kube-proxy-lp8fx is already assigned to node \"ha-331000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lp8fx" node="ha-331000-m03"
	E0908 11:23:30.809968       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d8be3dbd-99de-407b-a910-e39dbe6edb38(kube-system/kube-proxy-lp8fx) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-lp8fx"
	E0908 11:23:30.809987       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lp8fx\": pod kube-proxy-lp8fx is already assigned to node \"ha-331000-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-lp8fx"
	I0908 11:23:30.812920       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lp8fx" node="ha-331000-m03"
	E0908 11:28:51.641684       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7cbf4\": pod kindnet-7cbf4 is already assigned to node \"ha-331000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7cbf4" node="ha-331000-m04"
	E0908 11:28:51.643518       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 20eb4e22-f57b-407d-9d8c-76daf7bc90e0(kube-system/kindnet-7cbf4) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-7cbf4"
	E0908 11:28:51.643627       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7cbf4\": pod kindnet-7cbf4 is already assigned to node \"ha-331000-m04\"" logger="UnhandledError" pod="kube-system/kindnet-7cbf4"
	I0908 11:28:51.645034       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7cbf4" node="ha-331000-m04"
	
	
	==> kubelet <==
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.007864    2904 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.008235    2904 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: E0908 11:15:36.070629    2904 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-331000\" already exists" pod="kube-system/kube-apiserver-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: E0908 11:15:36.078805    2904 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-331000\" already exists" pod="kube-system/kube-controller-manager-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: E0908 11:15:36.108765    2904 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-331000\" already exists" pod="kube-system/kube-scheduler-ha-331000"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.295037    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-ha-331000" podStartSLOduration=1.2950197669999999 podStartE2EDuration="1.295019767s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:15:36.21773243 +0000 UTC m=+1.639789974" watchObservedRunningTime="2025-09-08 11:15:36.295019767 +0000 UTC m=+1.717077211"
	Sep 08 11:15:36 ha-331000 kubelet[2904]: I0908 11:15:36.457689    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ha-331000" podStartSLOduration=1.457635961 podStartE2EDuration="1.457635961s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:15:36.417024563 +0000 UTC m=+1.839082107" watchObservedRunningTime="2025-09-08 11:15:36.457635961 +0000 UTC m=+1.879693505"
	Sep 08 11:15:37 ha-331000 kubelet[2904]: I0908 11:15:37.401301    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d644b2de2060828a617429cff42a24609158d29262086069e3c9a74893405e0"
	Sep 08 11:15:38 ha-331000 kubelet[2904]: I0908 11:15:38.452832    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-smrc9" podStartSLOduration=3.4528146 podStartE2EDuration="3.4528146s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:15:38.452749999 +0000 UTC m=+3.874807543" watchObservedRunningTime="2025-09-08 11:15:38.4528146 +0000 UTC m=+3.874872044"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.597908    2904 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.832978    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s8k98" podStartSLOduration=16.970387441 podStartE2EDuration="23.832960357s" podCreationTimestamp="2025-09-08 11:15:35 +0000 UTC" firstStartedPulling="2025-09-08 11:15:37.405014632 +0000 UTC m=+2.827072076" lastFinishedPulling="2025-09-08 11:15:44.267587548 +0000 UTC m=+9.689644992" observedRunningTime="2025-09-08 11:15:46.683632792 +0000 UTC m=+12.105690336" watchObservedRunningTime="2025-09-08 11:15:58.832960357 +0000 UTC m=+24.255017901"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891829    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ppmw\" (UniqueName: \"kubernetes.io/projected/7d55f59c-2274-4acf-88e6-9d8249a799ec-kube-api-access-9ppmw\") pod \"coredns-66bc5c9577-66pcq\" (UID: \"7d55f59c-2274-4acf-88e6-9d8249a799ec\") " pod="kube-system/coredns-66bc5c9577-66pcq"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891874    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgctn\" (UniqueName: \"kubernetes.io/projected/91f36133-5872-4bf2-9606-697f746f797f-kube-api-access-dgctn\") pod \"storage-provisioner\" (UID: \"91f36133-5872-4bf2-9606-697f746f797f\") " pod="kube-system/storage-provisioner"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891905    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lzs5\" (UniqueName: \"kubernetes.io/projected/bfc5c253-e38e-4a3f-94b9-fb077529ad73-kube-api-access-4lzs5\") pod \"coredns-66bc5c9577-x595c\" (UID: \"bfc5c253-e38e-4a3f-94b9-fb077529ad73\") " pod="kube-system/coredns-66bc5c9577-x595c"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891926    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d55f59c-2274-4acf-88e6-9d8249a799ec-config-volume\") pod \"coredns-66bc5c9577-66pcq\" (UID: \"7d55f59c-2274-4acf-88e6-9d8249a799ec\") " pod="kube-system/coredns-66bc5c9577-66pcq"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891951    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/91f36133-5872-4bf2-9606-697f746f797f-tmp\") pod \"storage-provisioner\" (UID: \"91f36133-5872-4bf2-9606-697f746f797f\") " pod="kube-system/storage-provisioner"
	Sep 08 11:15:58 ha-331000 kubelet[2904]: I0908 11:15:58.891972    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfc5c253-e38e-4a3f-94b9-fb077529ad73-config-volume\") pod \"coredns-66bc5c9577-x595c\" (UID: \"bfc5c253-e38e-4a3f-94b9-fb077529ad73\") " pod="kube-system/coredns-66bc5c9577-x595c"
	Sep 08 11:15:59 ha-331000 kubelet[2904]: I0908 11:15:59.784449    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c821f225b0bb599592a36aac7bec4ea340c7f9d2b6b9f1795ec0bebb0f557f45"
	Sep 08 11:15:59 ha-331000 kubelet[2904]: I0908 11:15:59.861879    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e017b041362ad82b2f50619699fbc7817aa174dcfd11fdd7a477c41ac0cee38"
	Sep 08 11:15:59 ha-331000 kubelet[2904]: I0908 11:15:59.895963    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f06ca26bb0d46350387ead567b86c32d03c9cdcfc193aa2b23eeed4c17a82d"
	Sep 08 11:16:00 ha-331000 kubelet[2904]: I0908 11:16:00.955110    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x595c" podStartSLOduration=24.955093874 podStartE2EDuration="24.955093874s" podCreationTimestamp="2025-09-08 11:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:16:00.95040786 +0000 UTC m=+26.372465304" watchObservedRunningTime="2025-09-08 11:16:00.955093874 +0000 UTC m=+26.377151418"
	Sep 08 11:16:01 ha-331000 kubelet[2904]: I0908 11:16:01.042554    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.04253744 podStartE2EDuration="16.04253744s" podCreationTimestamp="2025-09-08 11:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:16:01.004134327 +0000 UTC m=+26.426191871" watchObservedRunningTime="2025-09-08 11:16:01.04253744 +0000 UTC m=+26.464594884"
	Sep 08 11:16:01 ha-331000 kubelet[2904]: I0908 11:16:01.080767    2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-66pcq" podStartSLOduration=25.080750152 podStartE2EDuration="25.080750152s" podCreationTimestamp="2025-09-08 11:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 11:16:01.080509651 +0000 UTC m=+26.502567195" watchObservedRunningTime="2025-09-08 11:16:01.080750152 +0000 UTC m=+26.502807596"
	Sep 08 11:24:37 ha-331000 kubelet[2904]: I0908 11:24:37.101545    2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkj7\" (UniqueName: \"kubernetes.io/projected/54e7a78b-44aa-46cb-a877-dc73d8d83565-kube-api-access-6pkj7\") pod \"busybox-7b57f96db7-9vn9f\" (UID: \"54e7a78b-44aa-46cb-a877-dc73d8d83565\") " pod="default/busybox-7b57f96db7-9vn9f"
	Sep 08 11:24:38 ha-331000 kubelet[2904]: I0908 11:24:38.196272    2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5353fd2e31b2d7d5559e16026a8ea6c4407aca4807d3e4c9ee40d27783ac82e"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-331000 -n ha-331000
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-331000 -n ha-331000: (12.3224321s)
helpers_test.go:269: (dbg) Run:  kubectl --context ha-331000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (96.59s)

TestMultiNode/serial/PingHostFrom2Pods (56.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- sh -c "ping -c 1 172.20.48.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.5019315s)

-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.20.48.1) from pod (busybox-7b57f96db7-ndqg5): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- sh -c "ping -c 1 172.20.48.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.5035175s)

-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.20.48.1) from pod (busybox-7b57f96db7-ztvwm): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-818700 -n multinode-818700
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-818700 -n multinode-818700: (12.2840858s)
helpers_test.go:252: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 logs -n 25: (8.6805222s)
helpers_test.go:260: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                         ARGS                                                                                                         │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ mount-start-2-476900 ssh -- ls /minikube-host                                                                                                                                                                        │ mount-start-2-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:09 UTC │ 08 Sep 25 12:09 UTC │
	│ delete  │ -p mount-start-1-476900 --alsologtostderr -v=5                                                                                                                                                                       │ mount-start-1-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:09 UTC │ 08 Sep 25 12:09 UTC │
	│ ssh     │ mount-start-2-476900 ssh -- ls /minikube-host                                                                                                                                                                        │ mount-start-2-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:09 UTC │ 08 Sep 25 12:10 UTC │
	│ stop    │ -p mount-start-2-476900                                                                                                                                                                                              │ mount-start-2-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:10 UTC │ 08 Sep 25 12:10 UTC │
	│ start   │ -p mount-start-2-476900                                                                                                                                                                                              │ mount-start-2-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:10 UTC │ 08 Sep 25 12:12 UTC │
	│ mount   │ C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial3517297216\001:/minikube-host --profile mount-start-2-476900 --v 0 --9p-version 9p2000.L --gid 0 --ip  --msize 6543 --port 46465 --type 9p --uid 0 │ mount-start-2-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:12 UTC │                     │
	│ ssh     │ mount-start-2-476900 ssh -- ls /minikube-host                                                                                                                                                                        │ mount-start-2-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:12 UTC │ 08 Sep 25 12:12 UTC │
	│ delete  │ -p mount-start-2-476900                                                                                                                                                                                              │ mount-start-2-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:12 UTC │ 08 Sep 25 12:13 UTC │
	│ delete  │ -p mount-start-1-476900                                                                                                                                                                                              │ mount-start-1-476900 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:13 UTC │ 08 Sep 25 12:13 UTC │
	│ start   │ -p multinode-818700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=hyperv                                                                                                                       │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:13 UTC │ 08 Sep 25 12:19 UTC │
	│ kubectl │ -p multinode-818700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml                                                                                                                                    │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- rollout status deployment/busybox                                                                                                                                                             │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- get pods -o jsonpath='{.items[*].status.podIP}'                                                                                                                                               │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                                                                                              │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- nslookup kubernetes.io                                                                                                                                       │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- nslookup kubernetes.io                                                                                                                                       │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- nslookup kubernetes.default                                                                                                                                  │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- nslookup kubernetes.default                                                                                                                                  │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- nslookup kubernetes.default.svc.cluster.local                                                                                                                │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- nslookup kubernetes.default.svc.cluster.local                                                                                                                │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                                                                                              │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                                                                                          │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- sh -c ping -c 1 172.20.48.1                                                                                                                                  │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │                     │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                                                                                          │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:20 UTC │
	│ kubectl │ -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- sh -c ping -c 1 172.20.48.1                                                                                                                                  │ multinode-818700     │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:13:05
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:13:05.531115    7416 out.go:360] Setting OutFile to fd 1932 ...
	I0908 12:13:05.618846    7416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:13:05.618994    7416 out.go:374] Setting ErrFile to fd 1952...
	I0908 12:13:05.619055    7416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:13:05.640559    7416 out.go:368] Setting JSON to false
	I0908 12:13:05.645570    7416 start.go:130] hostinfo: {"hostname":"minikube6","uptime":302437,"bootTime":1757031148,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 12:13:05.645570    7416 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 12:13:05.651571    7416 out.go:179] * [multinode-818700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 12:13:05.656566    7416 notify.go:220] Checking for updates...
	I0908 12:13:05.656566    7416 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:13:05.659570    7416 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:13:05.661580    7416 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 12:13:05.665584    7416 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:13:05.668555    7416 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:13:05.672610    7416 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:13:05.673562    7416 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:13:10.857468    7416 out.go:179] * Using the hyperv driver based on user configuration
	I0908 12:13:10.861779    7416 start.go:304] selected driver: hyperv
	I0908 12:13:10.861779    7416 start.go:918] validating driver "hyperv" against <nil>
	I0908 12:13:10.861779    7416 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:13:10.909598    7416 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:13:10.910611    7416 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:13:10.910611    7416 cni.go:84] Creating CNI manager for ""
	I0908 12:13:10.910611    7416 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0908 12:13:10.910611    7416 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 12:13:10.910611    7416 start.go:348] cluster config:
	{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:13:10.911595    7416 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:13:10.918594    7416 out.go:179] * Starting "multinode-818700" primary control-plane node in "multinode-818700" cluster
	I0908 12:13:10.922596    7416 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:13:10.922596    7416 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 12:13:10.922596    7416 cache.go:58] Caching tarball of preloaded images
	I0908 12:13:10.922596    7416 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 12:13:10.922596    7416 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:13:10.922596    7416 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:13:10.923595    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json: {Name:mk7ac06c5ac2d6fe11668a815e3dff04b95ab9be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:13:10.924594    7416 start.go:360] acquireMachinesLock for multinode-818700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 12:13:10.924594    7416 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-818700"
	I0908 12:13:10.924594    7416 start.go:93] Provisioning new machine with config: &{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 12:13:10.924594    7416 start.go:125] createHost starting for "" (driver="hyperv")
	I0908 12:13:10.929643    7416 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 12:13:10.929643    7416 start.go:159] libmachine.API.Create for "multinode-818700" (driver="hyperv")
	I0908 12:13:10.929643    7416 client.go:168] LocalClient.Create starting
	I0908 12:13:10.930604    7416 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 12:13:10.930604    7416 main.go:141] libmachine: Decoding PEM data...
	I0908 12:13:10.930604    7416 main.go:141] libmachine: Parsing certificate...
	I0908 12:13:10.930604    7416 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 12:13:10.931595    7416 main.go:141] libmachine: Decoding PEM data...
	I0908 12:13:10.931595    7416 main.go:141] libmachine: Parsing certificate...
	I0908 12:13:10.931595    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 12:13:12.958976    7416 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 12:13:12.958976    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:12.958976    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 12:13:14.712442    7416 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 12:13:14.713723    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:14.713723    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 12:13:16.190574    7416 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 12:13:16.190574    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:16.191635    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 12:13:19.732832    7416 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 12:13:19.732832    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:19.736192    7416 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 12:13:20.335690    7416 main.go:141] libmachine: Creating SSH key...
	I0908 12:13:20.413055    7416 main.go:141] libmachine: Creating VM...
	I0908 12:13:20.413055    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 12:13:23.215504    7416 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 12:13:23.215504    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:23.215911    7416 main.go:141] libmachine: Using switch "Default Switch"
	I0908 12:13:23.215954    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 12:13:24.968476    7416 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 12:13:24.968711    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:24.968771    7416 main.go:141] libmachine: Creating VHD
	I0908 12:13:24.968771    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 12:13:28.620144    7416 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4858F8C3-5886-4FBE-B1D8-1FC582ED9A49
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 12:13:28.620360    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:28.620360    7416 main.go:141] libmachine: Writing magic tar header
	I0908 12:13:28.620506    7416 main.go:141] libmachine: Writing SSH key tar header
	I0908 12:13:28.635664    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 12:13:31.751983    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:31.752721    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:31.752780    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\disk.vhd' -SizeBytes 20000MB
	I0908 12:13:34.251109    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:34.251109    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:34.251109    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-818700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 12:13:37.819675    7416 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-818700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 12:13:37.819675    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:37.819819    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-818700 -DynamicMemoryEnabled $false
	I0908 12:13:40.031243    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:40.031243    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:40.031831    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-818700 -Count 2
	I0908 12:13:42.111149    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:42.111149    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:42.112237    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-818700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\boot2docker.iso'
	I0908 12:13:44.636007    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:44.636007    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:44.636378    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-818700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\disk.vhd'
	I0908 12:13:47.230590    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:47.230590    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:47.230590    7416 main.go:141] libmachine: Starting VM...
	I0908 12:13:47.231302    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-818700
	I0908 12:13:50.296525    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:50.296525    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:50.296525    7416 main.go:141] libmachine: Waiting for host to start...
	I0908 12:13:50.297622    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:13:52.438595    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:13:52.439323    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:52.439431    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:13:54.890157    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:13:54.890157    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:55.891205    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:13:58.086004    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:13:58.086004    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:13:58.086004    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:00.580044    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:14:00.580044    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:01.580874    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:03.729615    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:03.729770    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:03.729931    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:06.220531    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:14:06.220531    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:07.221328    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:09.416487    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:09.416795    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:09.416929    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:11.867176    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:14:11.867176    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:12.868328    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:15.029795    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:15.030664    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:15.030904    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:17.849688    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:17.849688    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:17.849975    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:20.063892    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:20.063929    7416 main.go:141] libmachine: [stderr =====>] : 
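The "Waiting for host to start..." section above is a retry loop: check the VM state, query the first NIC's IP, and sleep a second when the IP query comes back empty (DHCP has not assigned an address yet). A sketch of that loop, with hypothetical `get_state`/`get_ip` callables standing in for the two PowerShell queries:

```python
import time

def wait_for_ip(get_state, get_ip, timeout=120, interval=1.0):
    """Poll until the VM's first NIC reports an IP address.

    get_state/get_ip stand in for the queries in the log:
    ( Get-VM <name> ).state and
    (( Get-VM <name> ).networkadapters[0]).ipaddresses[0].
    An empty string means no address has been assigned yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running" and (ip := get_ip()):
            return ip
        time.sleep(interval)
    raise TimeoutError("VM never reported an IP address")
```

In the log, four empty IP responses go by before `172.20.50.55` appears roughly 27 seconds after `Start-VM`.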
	I0908 12:14:20.064035    7416 machine.go:93] provisionDockerMachine start ...
	I0908 12:14:20.064310    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:22.260419    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:22.261047    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:22.261047    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:24.748557    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:24.748647    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:24.753846    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:14:24.768821    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:14:24.769050    7416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:14:24.901487    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 12:14:24.901487    7416 buildroot.go:166] provisioning hostname "multinode-818700"
	I0908 12:14:24.901623    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:26.988785    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:26.989196    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:26.989262    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:29.532943    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:29.532943    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:29.539791    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:14:29.540685    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:14:29.540685    7416 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-818700 && echo "multinode-818700" | sudo tee /etc/hostname
	I0908 12:14:29.716060    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-818700
	
	I0908 12:14:29.716181    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:31.729173    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:31.729414    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:31.729493    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:34.179275    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:34.179391    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:34.184420    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:14:34.185203    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:14:34.185203    7416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-818700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-818700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-818700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:14:34.342269    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
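The /etc/hosts fragment executed above is idempotent: do nothing if the hostname is already present, rewrite an existing `127.0.1.1` line if there is one, otherwise append a new one. The same logic as a pure function over the file's text (a sketch, not minikube's code):

```python
import re

def ensure_hostname(hosts_text, name):
    """Idempotent 127.0.1.1 entry, mirroring the shell fragment above."""
    if re.search(rf"\s{re.escape(name)}$", hosts_text, re.M):
        return hosts_text                      # already present, no-op
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts_text, flags=re.M)  # rewrite the existing line
    return hosts_text + f"127.0.1.1 {name}\n"  # append a fresh line

print(ensure_hostname("127.0.0.1 localhost\n127.0.1.1 minikube\n",
                      "multinode-818700"))
```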
	I0908 12:14:34.342269    7416 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 12:14:34.342269    7416 buildroot.go:174] setting up certificates
	I0908 12:14:34.342269    7416 provision.go:84] configureAuth start
	I0908 12:14:34.342269    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:36.398568    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:36.398846    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:36.398959    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:38.864074    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:38.864614    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:38.864614    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:40.949906    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:40.950283    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:40.950283    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:43.383215    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:43.384245    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:43.384302    7416 provision.go:143] copyHostCerts
	I0908 12:14:43.384498    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 12:14:43.384820    7416 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 12:14:43.384917    7416 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 12:14:43.385406    7416 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 12:14:43.386687    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 12:14:43.386800    7416 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 12:14:43.386800    7416 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 12:14:43.386800    7416 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 12:14:43.388123    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 12:14:43.388775    7416 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 12:14:43.388775    7416 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 12:14:43.389014    7416 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
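The copyHostCerts sequence above is a rm-then-cp for each certificate: any stale copy at the destination is removed before the fresh one is written. A stdlib sketch of that pattern, using throwaway files rather than the real .minikube paths:

```python
import os
import shutil
import tempfile

def copy_fresh(src, dst):
    """Mirror the rm-then-cp sequence above: remove any stale copy at dst,
    then copy src over (copy2 also preserves timestamps, like cp -p)."""
    if os.path.exists(dst):
        os.remove(dst)
    shutil.copy2(src, dst)
    return os.path.getsize(dst)

# Demo with scratch files; 1082 matches the ca.pem size in the log.
d = tempfile.mkdtemp()
src, dst = os.path.join(d, "ca.pem"), os.path.join(d, "out.pem")
with open(src, "wb") as f:
    f.write(b"x" * 1082)
with open(dst, "wb") as f:
    f.write(b"stale copy")
print(copy_fresh(src, dst))  # 1082
```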
	I0908 12:14:43.389853    7416 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-818700 san=[127.0.0.1 172.20.50.55 localhost minikube multinode-818700]
	I0908 12:14:43.726253    7416 provision.go:177] copyRemoteCerts
	I0908 12:14:43.736071    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:14:43.736071    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:45.742678    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:45.742678    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:45.743088    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:48.123216    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:48.123216    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:48.124447    7416 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:14:48.242159    7416 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5060313s)
	I0908 12:14:48.242305    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 12:14:48.242446    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:14:48.295147    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 12:14:48.296412    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0908 12:14:48.347961    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 12:14:48.348399    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:14:48.400867    7416 provision.go:87] duration metric: took 14.058421s to configureAuth
	I0908 12:14:48.400935    7416 buildroot.go:189] setting minikube options for container-runtime
	I0908 12:14:48.401542    7416 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:14:48.401542    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:50.437973    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:50.438722    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:50.438722    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:52.865275    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:52.865661    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:52.871327    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:14:52.872186    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:14:52.872186    7416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:14:53.013881    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 12:14:53.013962    7416 buildroot.go:70] root file system type: tmpfs
	I0908 12:14:53.014171    7416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:14:53.014291    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:55.026377    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:55.027299    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:55.027400    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:14:57.531840    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:14:57.531840    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:57.538964    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:14:57.540174    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:14:57.540174    7416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:14:57.709107    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 12:14:57.709196    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:14:59.739804    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:14:59.740728    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:14:59.740728    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:02.212361    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:02.213353    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:02.218137    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:15:02.218848    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:15:02.218960    7416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:15:03.590451    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
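The SSH command above (`diff -u old new || { mv new old; ...restart; }`) installs the staged docker.service only when it differs from, or is missing at, the target — here diff fails because no unit existed yet, so the file is moved into place and docker is enabled and restarted. The same update-only-when-changed pattern on plain files (systemctl calls omitted; a sketch, not minikube's code):

```python
import filecmp
import os
import shutil
import tempfile

def install_if_changed(staged, target):
    """Install staged over target only when target is missing or differs.
    Returns True when the caller should reload/restart the service."""
    if os.path.exists(target) and filecmp.cmp(staged, target, shallow=False):
        os.remove(staged)        # identical: drop the staged copy, no-op
        return False
    shutil.move(staged, target)  # first install, or contents changed
    return True

# Demo: first install, as in the log where docker.service did not exist yet.
d = tempfile.mkdtemp()
staged = os.path.join(d, "docker.service.new")
target = os.path.join(d, "docker.service")
with open(staged, "w") as f:
    f.write("[Service]\nExecStart=/usr/bin/dockerd\n")
print(install_if_changed(staged, target))  # True
```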
	
	I0908 12:15:03.590451    7416 machine.go:96] duration metric: took 43.5258674s to provisionDockerMachine
	I0908 12:15:03.590451    7416 client.go:171] duration metric: took 1m52.6593882s to LocalClient.Create
	I0908 12:15:03.590451    7416 start.go:167] duration metric: took 1m52.6593882s to libmachine.API.Create "multinode-818700"
	I0908 12:15:03.590451    7416 start.go:293] postStartSetup for "multinode-818700" (driver="hyperv")
	I0908 12:15:03.590451    7416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:15:03.604923    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:15:03.604923    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:15:05.578222    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:15:05.578222    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:05.578340    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:08.018891    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:08.019164    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:08.019725    7416 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:15:08.125645    7416 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.520666s)
	I0908 12:15:08.138553    7416 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:15:08.148713    7416 info.go:137] Remote host: Buildroot 2025.02
	I0908 12:15:08.148713    7416 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 12:15:08.149762    7416 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 12:15:08.150379    7416 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 12:15:08.150379    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 12:15:08.162277    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:15:08.183779    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 12:15:08.235152    7416 start.go:296] duration metric: took 4.6446428s for postStartSetup
	I0908 12:15:08.238311    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:15:10.297595    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:15:10.297777    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:10.297777    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:12.768673    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:12.769079    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:12.769079    7416 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:15:12.772282    7416 start.go:128] duration metric: took 2m1.8449813s to createHost
	I0908 12:15:12.772402    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:15:14.784559    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:15:14.784559    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:14.784738    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:17.220631    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:17.220631    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:17.226599    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:15:17.227432    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:15:17.227432    7416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 12:15:17.363816    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757333717.362766335
	
	I0908 12:15:17.363816    7416 fix.go:216] guest clock: 1757333717.362766335
	I0908 12:15:17.363816    7416 fix.go:229] Guest: 2025-09-08 12:15:17.362766335 +0000 UTC Remote: 2025-09-08 12:15:12.7723282 +0000 UTC m=+127.323635201 (delta=4.590438135s)
	I0908 12:15:17.363816    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:15:19.361660    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:15:19.362036    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:19.362036    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:21.820173    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:21.820173    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:21.828843    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:15:21.829513    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.50.55 22 <nil> <nil>}
	I0908 12:15:21.829513    7416 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757333717
	I0908 12:15:21.979130    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 12:15:17 UTC 2025
	
	I0908 12:15:21.979130    7416 fix.go:236] clock set: Mon Sep  8 12:15:17 UTC 2025
	 (err=<nil>)
	I0908 12:15:21.979130    7416 start.go:83] releasing machines lock for "multinode-818700", held for 2m11.0528844s
	I0908 12:15:21.979130    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:15:24.109580    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:15:24.110215    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:24.110368    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:26.634075    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:26.635056    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:26.639453    7416 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 12:15:26.639687    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:15:26.649291    7416 ssh_runner.go:195] Run: cat /version.json
	I0908 12:15:26.649291    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:15:28.887969    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:15:28.888043    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:28.888093    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:28.957853    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:15:28.958039    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:28.958114    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:15:31.490901    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:31.490901    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:31.492113    7416 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:15:31.546288    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:15:31.546288    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:15:31.547801    7416 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:15:31.580573    7416 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9409571s)
	W0908 12:15:31.580673    7416 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 12:15:31.645949    7416 ssh_runner.go:235] Completed: cat /version.json: (4.9965958s)
	I0908 12:15:31.658797    7416 ssh_runner.go:195] Run: systemctl --version
	I0908 12:15:31.677751    7416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 12:15:31.687200    7416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 12:15:31.697256    7416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0908 12:15:31.720048    7416 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 12:15:31.720113    7416 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 12:15:31.733118    7416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 12:15:31.733178    7416 start.go:495] detecting cgroup driver to use...
	I0908 12:15:31.733384    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:15:31.783346    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 12:15:31.816048    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 12:15:31.835570    7416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:15:31.847286    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:15:31.878151    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:15:31.910461    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:15:31.942139    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:15:31.971626    7416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:15:32.003463    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:15:32.034683    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:15:32.072839    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 12:15:32.105164    7416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:15:32.125447    7416 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 12:15:32.136583    7416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 12:15:32.170219    7416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:15:32.199270    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:15:32.417034    7416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:15:32.471667    7416 start.go:495] detecting cgroup driver to use...
	I0908 12:15:32.483504    7416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:15:32.521295    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:15:32.559103    7416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:15:32.601069    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:15:32.639630    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:15:32.677679    7416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 12:15:32.745861    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:15:32.772395    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:15:32.818256    7416 ssh_runner.go:195] Run: which cri-dockerd
	I0908 12:15:32.835644    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:15:32.854496    7416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:15:32.900048    7416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:15:33.143299    7416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:15:33.393536    7416 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:15:33.393536    7416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:15:33.442611    7416 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:15:33.481137    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:15:33.734693    7416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:15:34.420679    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:15:34.456366    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:15:34.489078    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:15:34.526611    7416 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:15:34.763955    7416 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:15:34.993868    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:15:35.223361    7416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:15:35.283726    7416 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:15:35.318920    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:15:35.549456    7416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:15:35.708142    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:15:35.732909    7416 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:15:35.744822    7416 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:15:35.754606    7416 start.go:563] Will wait 60s for crictl version
	I0908 12:15:35.767497    7416 ssh_runner.go:195] Run: which crictl
	I0908 12:15:35.786089    7416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:15:35.835905    7416 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:15:35.845937    7416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:15:35.888564    7416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:15:35.920139    7416 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:15:35.920246    7416 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 12:15:35.925486    7416 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 12:15:35.925486    7416 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 12:15:35.925486    7416 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 12:15:35.925486    7416 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 12:15:35.928588    7416 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 12:15:35.928588    7416 ip.go:218] interface addr: 172.20.48.1/20
	I0908 12:15:35.937187    7416 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 12:15:35.943595    7416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:15:35.966065    7416 kubeadm.go:875] updating cluster {Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:15:35.966312    7416 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:15:35.974469    7416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:15:35.996223    7416 docker.go:691] Got preloaded images: 
	I0908 12:15:35.996223    7416 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0908 12:15:36.006305    7416 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0908 12:15:36.035508    7416 ssh_runner.go:195] Run: which lz4
	I0908 12:15:36.042236    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0908 12:15:36.054828    7416 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 12:15:36.062600    7416 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 12:15:36.062600    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0908 12:15:38.437290    7416 docker.go:655] duration metric: took 2.3944812s to copy over tarball
	I0908 12:15:38.449173    7416 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 12:15:46.525278    7416 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.0759724s)
	I0908 12:15:46.525351    7416 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 12:15:46.590531    7416 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0908 12:15:46.609814    7416 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0908 12:15:46.656934    7416 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:15:46.691758    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:15:46.926678    7416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:15:49.009794    7416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.0830312s)
	I0908 12:15:49.018387    7416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:15:49.049152    7416 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0908 12:15:49.049152    7416 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:15:49.049152    7416 kubeadm.go:926] updating node { 172.20.50.55 8443 v1.34.0 docker true true} ...
	I0908 12:15:49.049766    7416 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-818700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.50.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:15:49.063882    7416 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 12:15:49.129724    7416 cni.go:84] Creating CNI manager for ""
	I0908 12:15:49.129805    7416 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0908 12:15:49.129805    7416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:15:49.129891    7416 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.50.55 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-818700 NodeName:multinode-818700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.50.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.50.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:15:49.130174    7416 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.50.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-818700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.50.55"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.50.55"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:15:49.142019    7416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:15:49.163125    7416 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:15:49.173989    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:15:49.192993    7416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0908 12:15:49.226105    7416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:15:49.256982    7416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0908 12:15:49.306851    7416 ssh_runner.go:195] Run: grep 172.20.50.55	control-plane.minikube.internal$ /etc/hosts
	I0908 12:15:49.316803    7416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.50.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:15:49.349521    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:15:49.591485    7416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:15:49.644196    7416 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700 for IP: 172.20.50.55
	I0908 12:15:49.644196    7416 certs.go:194] generating shared ca certs ...
	I0908 12:15:49.644276    7416 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:15:49.644620    7416 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 12:15:49.645621    7416 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 12:15:49.645848    7416 certs.go:256] generating profile certs ...
	I0908 12:15:49.646585    7416 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.key
	I0908 12:15:49.646794    7416 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.crt with IP's: []
	I0908 12:15:49.694811    7416 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.crt ...
	I0908 12:15:49.694811    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.crt: {Name:mk8a450d2f7916cd92c624da7dc5fd53e6e399b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:15:49.696819    7416 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.key ...
	I0908 12:15:49.696819    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.key: {Name:mka3733ebb1620f48cf705517075a982fcc1bd09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:15:49.697809    7416 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.bdde774b
	I0908 12:15:49.697809    7416 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.bdde774b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.50.55]
	I0908 12:15:49.942675    7416 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.bdde774b ...
	I0908 12:15:49.942675    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.bdde774b: {Name:mk76532d615fcc237fbacc568e95e4cfb2169f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:15:49.944399    7416 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.bdde774b ...
	I0908 12:15:49.944399    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.bdde774b: {Name:mk1cdb6be48725368259834276a5008579e77ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:15:49.944841    7416 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.bdde774b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt
	I0908 12:15:49.964580    7416 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.bdde774b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key
	I0908 12:15:49.966127    7416 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key
	I0908 12:15:49.966254    7416 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt with IP's: []
	I0908 12:15:50.078378    7416 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt ...
	I0908 12:15:50.078378    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt: {Name:mkb587af4836c2f7a7e941b8c054700769676269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:15:50.080397    7416 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key ...
	I0908 12:15:50.080397    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key: {Name:mk4d503c8778845f5f20b855bf3e898827db018a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:15:50.080784    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 12:15:50.081795    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 12:15:50.081795    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 12:15:50.081795    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 12:15:50.081795    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 12:15:50.081795    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 12:15:50.081795    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 12:15:50.094248    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 12:15:50.094637    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 12:15:50.095382    7416 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 12:15:50.095472    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 12:15:50.095932    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 12:15:50.096231    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 12:15:50.096577    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 12:15:50.097141    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 12:15:50.097621    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 12:15:50.097963    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 12:15:50.098156    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:15:50.099745    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:15:50.151433    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:15:50.198111    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:15:50.245606    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:15:50.294167    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 12:15:50.341986    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 12:15:50.393531    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:15:50.441519    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:15:50.489289    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 12:15:50.538254    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 12:15:50.585852    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:15:50.636922    7416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:15:50.682686    7416 ssh_runner.go:195] Run: openssl version
	I0908 12:15:50.701123    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 12:15:50.734368    7416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 12:15:50.741300    7416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 12:15:50.752005    7416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 12:15:50.772349    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 12:15:50.799943    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 12:15:50.827403    7416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 12:15:50.834823    7416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 12:15:50.846284    7416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 12:15:50.865596    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:15:50.895555    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:15:50.927177    7416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:15:50.934088    7416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:15:50.946042    7416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:15:50.966576    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
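The three `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's subject-hash lookup convention: a CA in `/etc/ssl/certs` is found via a symlink named `<subject-hash>.0`. A minimal standalone sketch of that convention (throwaway self-signed certificate and a temp directory standing in for the real paths; `sudo` from the log is dropped):

```shell
set -eu
dir=$(mktemp -d)
# stand-in CA certificate (self-signed, throwaway) in place of minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null
# OpenSSL locates CAs via <subject-hash>.0 symlinks in the certs directory
certhash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/${certhash}.0"
```

In the log the hash for `minikubeCA.pem` came out as `b5213941`, hence the `test -L /etc/ssl/certs/b5213941.0 || ln -fs ...` guard.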
	I0908 12:15:50.995183    7416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:15:51.001728    7416 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
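The `stat` failure above is expected: minikube treats a missing `apiserver-kubelet-client.crt` (stat exiting 1) as "likely first start" rather than an error. A hedged sketch of that probe (the function name is illustrative, not minikube's):

```shell
# Treat a failing stat as "first start", mirroring the exit-status-1 branch above.
probe_cert() {
  if stat "$1" >/dev/null 2>&1; then
    echo "cert exists, reusing"
  else
    echo "cert missing, likely first start"
  fi
}
probe_cert /var/lib/minikube/certs/apiserver-kubelet-client.crt
```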
	I0908 12:15:51.002097    7416 kubeadm.go:392] StartCluster: {Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:15:51.010615    7416 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:15:51.045899    7416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:15:51.072152    7416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:15:51.103478    7416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:15:51.122046    7416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 12:15:51.122046    7416 kubeadm.go:157] found existing configuration files:
	
	I0908 12:15:51.132882    7416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 12:15:51.151825    7416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 12:15:51.162399    7416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 12:15:51.188019    7416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 12:15:51.204660    7416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 12:15:51.215227    7416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:15:51.241033    7416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 12:15:51.258010    7416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 12:15:51.266876    7416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:15:51.297987    7416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 12:15:51.316833    7416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 12:15:51.325801    7416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
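The four grep-then-rm pairs above implement one sweep: each kubeconfig under `/etc/kubernetes` is removed unless it already points at the expected control-plane endpoint (a missing file makes grep exit 2, which lands in the same "remove" branch). A sketch under that assumption, with the directory taken as a parameter so it can be exercised outside the VM (the log runs it via `sudo` against `/etc/kubernetes`):

```shell
# Remove each kubeconfig that does not reference the expected endpoint.
sweep_stale_kubeconfigs() {
  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    path="$1/$f"
    if ! grep -q "$endpoint" "$path" 2>/dev/null; then
      rm -f "$path"   # a missing file (grep exit 2) lands here too
    fi
  done
}
# e.g. sweep_stale_kubeconfigs /etc/kubernetes   (run as root in the log)
```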
	I0908 12:15:51.345561    7416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 12:15:51.551457    7416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 12:16:07.399633    7416 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 12:16:07.399633    7416 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 12:16:07.399633    7416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 12:16:07.399633    7416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 12:16:07.399633    7416 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 12:16:07.399633    7416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 12:16:07.403999    7416 out.go:252]   - Generating certificates and keys ...
	I0908 12:16:07.403999    7416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 12:16:07.404710    7416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 12:16:07.404710    7416 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 12:16:07.404710    7416 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 12:16:07.405276    7416 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 12:16:07.405425    7416 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 12:16:07.405425    7416 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 12:16:07.405753    7416 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-818700] and IPs [172.20.50.55 127.0.0.1 ::1]
	I0908 12:16:07.405940    7416 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 12:16:07.406209    7416 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-818700] and IPs [172.20.50.55 127.0.0.1 ::1]
	I0908 12:16:07.406209    7416 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 12:16:07.406209    7416 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 12:16:07.406209    7416 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 12:16:07.406877    7416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 12:16:07.406950    7416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 12:16:07.406950    7416 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 12:16:07.406950    7416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 12:16:07.406950    7416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 12:16:07.406950    7416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 12:16:07.407485    7416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 12:16:07.407801    7416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 12:16:07.413010    7416 out.go:252]   - Booting up control plane ...
	I0908 12:16:07.413626    7416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 12:16:07.413626    7416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 12:16:07.413626    7416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 12:16:07.414250    7416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 12:16:07.414250    7416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 12:16:07.414930    7416 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 12:16:07.414973    7416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 12:16:07.414973    7416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 12:16:07.414973    7416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 12:16:07.415664    7416 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 12:16:07.415664    7416 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501945944s
	I0908 12:16:07.416204    7416 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 12:16:07.416331    7416 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://172.20.50.55:8443/livez
	I0908 12:16:07.416331    7416 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 12:16:07.416331    7416 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 12:16:07.416877    7416 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.636712812s
	I0908 12:16:07.417019    7416 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.628192733s
	I0908 12:16:07.417019    7416 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 8.501309995s
	I0908 12:16:07.417019    7416 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 12:16:07.417582    7416 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 12:16:07.417660    7416 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 12:16:07.418181    7416 kubeadm.go:310] [mark-control-plane] Marking the node multinode-818700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 12:16:07.418307    7416 kubeadm.go:310] [bootstrap-token] Using token: zzkcza.m5fbgg77t7y85z6x
	I0908 12:16:07.421907    7416 out.go:252]   - Configuring RBAC rules ...
	I0908 12:16:07.421907    7416 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 12:16:07.422285    7416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 12:16:07.422285    7416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 12:16:07.422953    7416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 12:16:07.423047    7416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 12:16:07.423047    7416 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 12:16:07.423568    7416 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 12:16:07.423600    7416 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 12:16:07.423805    7416 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 12:16:07.423805    7416 kubeadm.go:310] 
	I0908 12:16:07.423845    7416 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 12:16:07.423845    7416 kubeadm.go:310] 
	I0908 12:16:07.424019    7416 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 12:16:07.424019    7416 kubeadm.go:310] 
	I0908 12:16:07.424019    7416 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 12:16:07.424019    7416 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 12:16:07.424592    7416 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 12:16:07.424592    7416 kubeadm.go:310] 
	I0908 12:16:07.424592    7416 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 12:16:07.424592    7416 kubeadm.go:310] 
	I0908 12:16:07.424592    7416 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 12:16:07.424592    7416 kubeadm.go:310] 
	I0908 12:16:07.424592    7416 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 12:16:07.425219    7416 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 12:16:07.425494    7416 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 12:16:07.425623    7416 kubeadm.go:310] 
	I0908 12:16:07.425727    7416 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 12:16:07.425946    7416 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 12:16:07.425946    7416 kubeadm.go:310] 
	I0908 12:16:07.425946    7416 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zzkcza.m5fbgg77t7y85z6x \
	I0908 12:16:07.425946    7416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 \
	I0908 12:16:07.425946    7416 kubeadm.go:310] 	--control-plane 
	I0908 12:16:07.426466    7416 kubeadm.go:310] 
	I0908 12:16:07.426593    7416 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 12:16:07.426593    7416 kubeadm.go:310] 
	I0908 12:16:07.426593    7416 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zzkcza.m5fbgg77t7y85z6x \
	I0908 12:16:07.427208    7416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 
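The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed from the CA certificate with the pipeline documented for kubeadm (wrapped in a helper here; the `/var/lib/minikube/certs/ca.crt` path is the one this log stages):

```shell
# sha256 of the DER-encoded public key of a (RSA) CA certificate
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
}
# e.g. ca_cert_hash /var/lib/minikube/certs/ca.crt
```

Prefixed with `sha256:`, the output should match the value in the join command.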
	I0908 12:16:07.427208    7416 cni.go:84] Creating CNI manager for ""
	I0908 12:16:07.427208    7416 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0908 12:16:07.430808    7416 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 12:16:07.445710    7416 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 12:16:07.454478    7416 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 12:16:07.454478    7416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 12:16:07.512131    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 12:16:07.897865    7416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:16:07.912099    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-818700 minikube.k8s.io/updated_at=2025_09_08T12_16_07_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=multinode-818700 minikube.k8s.io/primary=true
	I0908 12:16:07.914252    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:07.937456    7416 ops.go:34] apiserver oom_adj: -16
	I0908 12:16:08.085710    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:08.583797    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:09.084126    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:09.585188    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:10.083269    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:10.583804    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:11.084921    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:11.584975    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:16:11.710255    7416 kubeadm.go:1105] duration metric: took 3.8122592s to wait for elevateKubeSystemPrivileges
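The run of `kubectl get sa default` invocations above is a ~500 ms polling loop: minikube retries until the `default` ServiceAccount exists (the elevateKubeSystemPrivileges wait, 3.8 s here). The same pattern as a generic helper (name and attempt count are illustrative):

```shell
# Retry a command every 0.5 s until it succeeds or attempts run out.
retry_until() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 0.5
  done
}
# e.g. retry_until 60 sudo /var/lib/minikube/binaries/v1.34.0/kubectl \
#        get sa default --kubeconfig=/var/lib/minikube/kubeconfig
```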
	I0908 12:16:11.710391    7416 kubeadm.go:394] duration metric: took 20.7080323s to StartCluster
	I0908 12:16:11.710457    7416 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:16:11.710741    7416 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:16:11.713251    7416 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:16:11.714715    7416 start.go:235] Will wait 6m0s for node &{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 12:16:11.714715    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 12:16:11.714715    7416 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:16:11.714715    7416 addons.go:69] Setting storage-provisioner=true in profile "multinode-818700"
	I0908 12:16:11.714715    7416 addons.go:238] Setting addon storage-provisioner=true in "multinode-818700"
	I0908 12:16:11.714715    7416 addons.go:69] Setting default-storageclass=true in profile "multinode-818700"
	I0908 12:16:11.715726    7416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-818700"
	I0908 12:16:11.714715    7416 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:16:11.715726    7416 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:16:11.715726    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:16:11.716713    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:16:11.717787    7416 out.go:179] * Verifying Kubernetes components...
	I0908 12:16:11.743724    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:16:12.126129    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 12:16:12.259212    7416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:16:12.767927    7416 start.go:976] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0908 12:16:12.769926    7416 kapi.go:59] client config for multinode-818700: &rest.Config{Host:"https://172.20.50.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 12:16:12.769926    7416 kapi.go:59] client config for multinode-818700: &rest.Config{Host:"https://172.20.50.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 12:16:12.771929    7416 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0908 12:16:12.771929    7416 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 12:16:12.771929    7416 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 12:16:12.771929    7416 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 12:16:12.771929    7416 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 12:16:12.771929    7416 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 12:16:12.771929    7416 node_ready.go:35] waiting up to 6m0s for node "multinode-818700" to be "Ready" ...
	I0908 12:16:13.287167    7416 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-818700" context rescaled to 1 replicas
	I0908 12:16:14.065783    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:16:14.065783    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:16:14.065783    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:14.065783    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:14.068930    7416 kapi.go:59] client config for multinode-818700: &rest.Config{Host:"https://172.20.50.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 12:16:14.070708    7416 addons.go:238] Setting addon default-storageclass=true in "multinode-818700"
	I0908 12:16:14.070708    7416 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:16:14.070708    7416 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:16:14.070708    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:16:14.075427    7416 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:16:14.075427    7416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:16:14.076038    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	W0908 12:16:14.787881    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	I0908 12:16:16.521250    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:16:16.521250    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:16.521898    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:16:16.521898    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:16.521898    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:16:16.521898    7416 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:16:16.521898    7416 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:16:16.521898    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	W0908 12:16:17.279007    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	I0908 12:16:18.790635    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:16:18.790635    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:18.790635    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:16:19.172838    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:16:19.172838    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:19.173316    7416 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:16:19.342013    7416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0908 12:16:20.127422    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	I0908 12:16:20.667967    7416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3259374s)
	I0908 12:16:21.383507    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:16:21.383926    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:21.384511    7416 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:16:21.523064    7416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:16:21.737785    7416 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 12:16:21.742498    7416 addons.go:514] duration metric: took 10.0276573s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0908 12:16:22.276659    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:16:24.277466    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:16:26.776661    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:16:28.777382    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:16:31.279814    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:16:33.285353    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:16:35.779231    7416 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	I0908 12:16:36.277129    7416 node_ready.go:49] node "multinode-818700" is "Ready"
	I0908 12:16:36.277129    7416 node_ready.go:38] duration metric: took 23.5049042s for node "multinode-818700" to be "Ready" ...
	I0908 12:16:36.277129    7416 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:16:36.290135    7416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:16:36.353531    7416 api_server.go:72] duration metric: took 24.6385055s to wait for apiserver process to appear ...
	I0908 12:16:36.353531    7416 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:16:36.353704    7416 api_server.go:253] Checking apiserver healthz at https://172.20.50.55:8443/healthz ...
	I0908 12:16:36.363987    7416 api_server.go:279] https://172.20.50.55:8443/healthz returned 200:
	ok
	I0908 12:16:36.365210    7416 api_server.go:141] control plane version: v1.34.0
	I0908 12:16:36.365210    7416 api_server.go:131] duration metric: took 11.5066ms to wait for apiserver health ...
	I0908 12:16:36.365210    7416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:16:36.374650    7416 system_pods.go:59] 8 kube-system pods found
	I0908 12:16:36.374743    7416 system_pods.go:61] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:16:36.374743    7416 system_pods.go:61] "etcd-multinode-818700" [e828f5da-839a-4060-89b0-ba7dc884b7ee] Running
	I0908 12:16:36.374743    7416 system_pods.go:61] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:16:36.374743    7416 system_pods.go:61] "kube-apiserver-multinode-818700" [694377e7-551d-493f-a7e2-23ed065f82df] Running
	I0908 12:16:36.374743    7416 system_pods.go:61] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:16:36.374743    7416 system_pods.go:61] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:16:36.374937    7416 system_pods.go:61] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:16:36.374937    7416 system_pods.go:61] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:16:36.375035    7416 system_pods.go:74] duration metric: took 9.7823ms to wait for pod list to return data ...
	I0908 12:16:36.375035    7416 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:16:36.382711    7416 default_sa.go:45] found service account: "default"
	I0908 12:16:36.383378    7416 default_sa.go:55] duration metric: took 8.2906ms for default service account to be created ...
	I0908 12:16:36.383378    7416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:16:36.389685    7416 system_pods.go:86] 8 kube-system pods found
	I0908 12:16:36.389685    7416 system_pods.go:89] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:16:36.389685    7416 system_pods.go:89] "etcd-multinode-818700" [e828f5da-839a-4060-89b0-ba7dc884b7ee] Running
	I0908 12:16:36.389685    7416 system_pods.go:89] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:16:36.389685    7416 system_pods.go:89] "kube-apiserver-multinode-818700" [694377e7-551d-493f-a7e2-23ed065f82df] Running
	I0908 12:16:36.389685    7416 system_pods.go:89] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:16:36.389685    7416 system_pods.go:89] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:16:36.389685    7416 system_pods.go:89] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:16:36.389685    7416 system_pods.go:89] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:16:36.389685    7416 retry.go:31] will retry after 217.907051ms: missing components: kube-dns
	I0908 12:16:36.619436    7416 system_pods.go:86] 8 kube-system pods found
	I0908 12:16:36.619512    7416 system_pods.go:89] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:16:36.619512    7416 system_pods.go:89] "etcd-multinode-818700" [e828f5da-839a-4060-89b0-ba7dc884b7ee] Running
	I0908 12:16:36.619512    7416 system_pods.go:89] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:16:36.619512    7416 system_pods.go:89] "kube-apiserver-multinode-818700" [694377e7-551d-493f-a7e2-23ed065f82df] Running
	I0908 12:16:36.619512    7416 system_pods.go:89] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:16:36.619512    7416 system_pods.go:89] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:16:36.619601    7416 system_pods.go:89] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:16:36.619622    7416 system_pods.go:89] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:16:36.619622    7416 retry.go:31] will retry after 281.013935ms: missing components: kube-dns
	I0908 12:16:36.910776    7416 system_pods.go:86] 8 kube-system pods found
	I0908 12:16:36.910854    7416 system_pods.go:89] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:16:36.910979    7416 system_pods.go:89] "etcd-multinode-818700" [e828f5da-839a-4060-89b0-ba7dc884b7ee] Running
	I0908 12:16:36.910979    7416 system_pods.go:89] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:16:36.911034    7416 system_pods.go:89] "kube-apiserver-multinode-818700" [694377e7-551d-493f-a7e2-23ed065f82df] Running
	I0908 12:16:36.911054    7416 system_pods.go:89] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:16:36.911054    7416 system_pods.go:89] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:16:36.911080    7416 system_pods.go:89] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:16:36.911080    7416 system_pods.go:89] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:16:36.911118    7416 retry.go:31] will retry after 378.487242ms: missing components: kube-dns
	I0908 12:16:37.320076    7416 system_pods.go:86] 8 kube-system pods found
	I0908 12:16:37.320610    7416 system_pods.go:89] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:16:37.320610    7416 system_pods.go:89] "etcd-multinode-818700" [e828f5da-839a-4060-89b0-ba7dc884b7ee] Running
	I0908 12:16:37.320643    7416 system_pods.go:89] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:16:37.320643    7416 system_pods.go:89] "kube-apiserver-multinode-818700" [694377e7-551d-493f-a7e2-23ed065f82df] Running
	I0908 12:16:37.320643    7416 system_pods.go:89] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:16:37.320643    7416 system_pods.go:89] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:16:37.320643    7416 system_pods.go:89] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:16:37.320643    7416 system_pods.go:89] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:16:37.320643    7416 retry.go:31] will retry after 602.634427ms: missing components: kube-dns
	I0908 12:16:37.937947    7416 system_pods.go:86] 8 kube-system pods found
	I0908 12:16:37.938023    7416 system_pods.go:89] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Running
	I0908 12:16:37.938082    7416 system_pods.go:89] "etcd-multinode-818700" [e828f5da-839a-4060-89b0-ba7dc884b7ee] Running
	I0908 12:16:37.938082    7416 system_pods.go:89] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:16:37.938082    7416 system_pods.go:89] "kube-apiserver-multinode-818700" [694377e7-551d-493f-a7e2-23ed065f82df] Running
	I0908 12:16:37.938082    7416 system_pods.go:89] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:16:37.938082    7416 system_pods.go:89] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:16:37.938082    7416 system_pods.go:89] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:16:37.938082    7416 system_pods.go:89] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Running
	I0908 12:16:37.938082    7416 system_pods.go:126] duration metric: took 1.5546844s to wait for k8s-apps to be running ...
	I0908 12:16:37.938082    7416 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:16:37.948691    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:16:37.985936    7416 system_svc.go:56] duration metric: took 47.7316ms WaitForService to wait for kubelet
	I0908 12:16:37.985936    7416 kubeadm.go:578] duration metric: took 26.2708903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:16:37.986002    7416 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:16:37.991113    7416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:16:37.991172    7416 node_conditions.go:123] node cpu capacity is 2
	I0908 12:16:37.991232    7416 node_conditions.go:105] duration metric: took 5.1703ms to run NodePressure ...
	I0908 12:16:37.991232    7416 start.go:241] waiting for startup goroutines ...
	I0908 12:16:37.991232    7416 start.go:246] waiting for cluster config update ...
	I0908 12:16:37.991232    7416 start.go:255] writing updated cluster config ...
	I0908 12:16:37.995089    7416 out.go:203] 
	I0908 12:16:37.998865    7416 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:16:38.010593    7416 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:16:38.010593    7416 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:16:38.017561    7416 out.go:179] * Starting "multinode-818700-m02" worker node in "multinode-818700" cluster
	I0908 12:16:38.020630    7416 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:16:38.020630    7416 cache.go:58] Caching tarball of preloaded images
	I0908 12:16:38.020630    7416 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 12:16:38.020630    7416 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:16:38.020630    7416 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:16:38.025543    7416 start.go:360] acquireMachinesLock for multinode-818700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 12:16:38.026571    7416 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-818700-m02"
	I0908 12:16:38.026571    7416 start.go:93] Provisioning new machine with config: &{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0908 12:16:38.026571    7416 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0908 12:16:38.029542    7416 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 12:16:38.030552    7416 start.go:159] libmachine.API.Create for "multinode-818700" (driver="hyperv")
	I0908 12:16:38.030552    7416 client.go:168] LocalClient.Create starting
	I0908 12:16:38.030552    7416 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 12:16:38.030552    7416 main.go:141] libmachine: Decoding PEM data...
	I0908 12:16:38.030552    7416 main.go:141] libmachine: Parsing certificate...
	I0908 12:16:38.030552    7416 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 12:16:38.031560    7416 main.go:141] libmachine: Decoding PEM data...
	I0908 12:16:38.031560    7416 main.go:141] libmachine: Parsing certificate...
	I0908 12:16:38.031560    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 12:16:39.917008    7416 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 12:16:39.917008    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:39.917008    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 12:16:41.653848    7416 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 12:16:41.653962    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:41.654085    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 12:16:43.146481    7416 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 12:16:43.146481    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:43.146481    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 12:16:46.731430    7416 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 12:16:46.731430    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:46.734862    7416 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 12:16:47.356893    7416 main.go:141] libmachine: Creating SSH key...
	I0908 12:16:47.731582    7416 main.go:141] libmachine: Creating VM...
	I0908 12:16:47.731582    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 12:16:50.631698    7416 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 12:16:50.631698    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:50.632361    7416 main.go:141] libmachine: Using switch "Default Switch"
	I0908 12:16:50.632493    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 12:16:52.379573    7416 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 12:16:52.379573    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:52.379573    7416 main.go:141] libmachine: Creating VHD
	I0908 12:16:52.380342    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0908 12:16:56.131965    7416 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FA1C26E3-3275-4FEA-AC53-C79E422C104D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0908 12:16:56.132222    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:56.132222    7416 main.go:141] libmachine: Writing magic tar header
	I0908 12:16:56.132222    7416 main.go:141] libmachine: Writing SSH key tar header
	I0908 12:16:56.145476    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0908 12:16:59.308452    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:16:59.308452    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:16:59.308877    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\disk.vhd' -SizeBytes 20000MB
	I0908 12:17:01.792303    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:01.793339    7416 main.go:141] libmachine: [stderr =====>] : 
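The sequence above is the driver's disk-bootstrap trick: it creates a small *fixed* VHD (raw data followed by a 512-byte footer, which is why `FileSize` is exactly `Size + 512`), writes a "magic" tar stream (the SSH key, per the "Writing SSH key tar header" line) straight into the data region, then converts the file to a dynamic VHD and resizes it; the guest detects the tar signature on first boot and extracts it. A minimal sketch of the seed-file step, assuming this layout (function names are illustrative, not minikube's actual code):

```python
import io
import os
import tarfile
import tempfile

SECTOR = 512  # a fixed VHD is raw disk data followed by a 512-byte footer


def make_seed_vhd(path, size_bytes, files):
    """Create a fixed-VHD-shaped file whose data region begins with a tar
    stream, mirroring the 'magic tar header' step before Convert-VHD."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, payload in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
    data = buf.getvalue()
    with open(path, "wb") as f:
        f.write(data)                              # tar stream at offset 0
        f.write(b"\0" * (size_bytes - len(data)))  # pad the data region
        f.write(b"\0" * SECTOR)                    # placeholder VHD footer


tmp = tempfile.mkdtemp()
p = os.path.join(tmp, "fixed.vhd")
make_seed_vhd(p, 10 * 1024 * 1024, {"home/docker/.ssh/id_rsa": b"KEY"})
print(os.path.getsize(p))  # -> 10486272, matching FileSize in the log
```

The 10486272-byte result matches the `FileSize`/`Size` pair reported by `New-VHD` above: 10 MiB of data plus the 512-byte footer.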
	I0908 12:17:01.793477    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-818700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0908 12:17:05.448603    7416 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-818700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0908 12:17:05.448603    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:05.448603    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-818700-m02 -DynamicMemoryEnabled $false
	I0908 12:17:07.612798    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:07.612798    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:07.612798    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-818700-m02 -Count 2
	I0908 12:17:09.741724    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:09.742142    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:09.742232    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-818700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\boot2docker.iso'
	I0908 12:17:12.303461    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:12.303461    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:12.303461    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-818700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\disk.vhd'
	I0908 12:17:15.003494    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:15.004492    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:15.004492    7416 main.go:141] libmachine: Starting VM...
	I0908 12:17:15.004492    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-818700-m02
	I0908 12:17:18.170693    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:18.170693    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:18.170693    7416 main.go:141] libmachine: Waiting for host to start...
	I0908 12:17:18.170693    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:20.498709    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:20.498709    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:20.498783    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:17:23.011409    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:23.011486    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:24.011599    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:26.179514    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:26.179514    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:26.179945    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:17:28.772683    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:28.772683    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:29.773123    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:31.928757    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:31.929849    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:31.929849    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:17:34.421309    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:34.421770    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:35.422361    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:37.664407    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:37.664407    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:37.665122    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:17:40.211265    7416 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:17:40.211812    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:41.212597    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:43.412394    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:43.412394    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:43.412394    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:17:46.018537    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:17:46.018537    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:46.018537    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:48.152348    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:48.152348    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:48.152348    7416 machine.go:93] provisionDockerMachine start ...
	I0908 12:17:48.152348    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:50.289864    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:50.289864    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:50.290874    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:17:52.754813    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:17:52.755296    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:52.760881    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:17:52.778379    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:17:52.778379    7416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:17:52.910227    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 12:17:52.910345    7416 buildroot.go:166] provisioning hostname "multinode-818700-m02"
	I0908 12:17:52.910345    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:55.006632    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:55.006632    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:55.007715    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:17:57.557803    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:17:57.558461    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:57.563571    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:17:57.564115    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:17:57.564194    7416 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-818700-m02 && echo "multinode-818700-m02" | sudo tee /etc/hostname
	I0908 12:17:57.747525    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-818700-m02
	
	I0908 12:17:57.747525    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:17:59.846383    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:17:59.846383    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:17:59.846383    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:02.348293    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:02.348811    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:02.354632    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:18:02.355285    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:18:02.355285    7416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-818700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-818700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-818700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:18:02.511145    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
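The shell run over SSH above is an idempotent `/etc/hosts` edit: skip if any line already ends with the hostname, otherwise rewrite an existing `127.0.1.1` entry in place, or append one. The same logic rendered in Python (a sketch, not minikube code):

```python
import re


def ensure_host_entry(hosts_text, hostname):
    """Mirror the grep/sed/tee sequence: make 127.0.1.1 map to hostname,
    touching the file only when no line already carries the name."""
    lines = hosts_text.splitlines()
    if any(re.fullmatch(r".*\s" + re.escape(hostname), l) for l in lines):
        return hosts_text  # already present: no-op
    for i, l in enumerate(lines):
        if re.fullmatch(r"127\.0\.1\.1\s.*", l):
            lines[i] = f"127.0.1.1 {hostname}"  # rewrite the existing entry
            break
    else:
        lines.append(f"127.0.1.1 {hostname}")   # or append a new one
    return "\n".join(lines) + "\n"


before = "127.0.0.1 localhost\n127.0.1.1 minikube\n"
updated = ensure_host_entry(before, "multinode-818700-m02")
print(updated)
```

Running it again on its own output is a no-op, which is why the empty SSH output on the next log line is the success case.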
	I0908 12:18:02.511145    7416 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 12:18:02.511145    7416 buildroot.go:174] setting up certificates
	I0908 12:18:02.511145    7416 provision.go:84] configureAuth start
	I0908 12:18:02.511145    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:04.585956    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:04.585956    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:04.585956    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:07.159196    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:07.159196    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:07.159196    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:09.243727    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:09.243727    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:09.244175    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:11.782870    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:11.783755    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:11.783844    7416 provision.go:143] copyHostCerts
	I0908 12:18:11.783954    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 12:18:11.783954    7416 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 12:18:11.783954    7416 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 12:18:11.784693    7416 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 12:18:11.786256    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 12:18:11.786573    7416 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 12:18:11.786573    7416 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 12:18:11.787040    7416 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 12:18:11.787951    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 12:18:11.787951    7416 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 12:18:11.787951    7416 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 12:18:11.788720    7416 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 12:18:11.789482    7416 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-818700-m02 san=[127.0.0.1 172.20.62.186 localhost minikube multinode-818700-m02]
	I0908 12:18:13.051797    7416 provision.go:177] copyRemoteCerts
	I0908 12:18:13.063281    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:18:13.063281    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:15.153058    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:15.153119    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:15.153119    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:17.688768    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:17.689329    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:17.690000    7416 sshutil.go:53] new ssh client: &{IP:172.20.62.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:18:17.800704    7416 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7372702s)
	I0908 12:18:17.800791    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 12:18:17.801258    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:18:17.858427    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 12:18:17.858427    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0908 12:18:17.913931    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 12:18:17.914001    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:18:17.963708    7416 provision.go:87] duration metric: took 15.4523688s to configureAuth
	I0908 12:18:17.963777    7416 buildroot.go:189] setting minikube options for container-runtime
	I0908 12:18:17.964530    7416 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:18:17.964684    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:20.017948    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:20.018967    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:20.019106    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:22.530173    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:22.530173    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:22.536787    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:18:22.537349    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:18:22.537493    7416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:18:22.690743    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 12:18:22.690743    7416 buildroot.go:70] root file system type: tmpfs
	I0908 12:18:22.690743    7416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:18:22.691368    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:24.805378    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:24.805378    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:24.806402    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:27.310370    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:27.311114    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:27.317098    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:18:27.317813    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:18:27.317813    7416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.20.50.55"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:18:27.488471    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.20.50.55
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
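The unit file echoed back above is assembled from a template: the provisioner splices in proxy environment lines (here `NO_PROXY=172.20.50.55`, inherited from the primary node) and driver-specific `dockerd` flags (`--label provider=hyperv`, `--insecure-registry 10.96.0.0/12` for the service CIDR), then pipes the result through `sudo tee` to `docker.service.new`. A trimmed sketch of that templating step (the template below keeps only the parts visible in the log):

```python
def render_docker_unit(extra_env, exec_args):
    """Assemble a docker.service unit body; extra_env and exec_args stand in
    for the values the provisioner derives from the cluster config."""
    env_lines = "\n".join(f'Environment="{e}"' for e in extra_env)
    return f"""[Unit]
Description=Docker Application Container Engine
Requires=docker.socket

[Service]
Type=notify
Restart=always
{env_lines}
ExecStart=
ExecStart=/usr/bin/dockerd {exec_args}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
"""


unit = render_docker_unit(
    ["NO_PROXY=172.20.50.55"],
    "-H tcp://0.0.0.0:2376 --label provider=hyperv "
    "--insecure-registry 10.96.0.0/12",
)
print(unit)
```

Note the empty `ExecStart=` line before the real one: that is the standard systemd idiom for clearing an inherited `ExecStart` before overriding it.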
	I0908 12:18:27.489471    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:29.664660    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:29.664660    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:29.665005    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:32.164966    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:32.164966    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:32.169981    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:18:32.170681    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:18:32.170681    7416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:18:33.612208    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
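The `diff -u old new || { mv ...; systemctl restart docker; }` command above is a change-detection idiom: docker is only reinstalled and restarted when the new unit differs from the current one. On this fresh node `diff` cannot even stat the old file, so the `||` branch fires and the unit is installed for the first time (hence the "Created symlink" output from `systemctl enable`). The idiom sketched in Python:

```python
import os
import tempfile


def install_if_changed(current_path, new_text, on_change):
    """Replace current_path with new_text and fire on_change only when the
    content differs; a missing file counts as changed, like diff failing."""
    try:
        with open(current_path) as f:
            if f.read() == new_text:
                return False  # unchanged: skip the restart entirely
    except FileNotFoundError:
        pass  # first install, as on the fresh node in the log
    with open(current_path, "w") as f:
        f.write(new_text)
    on_change()
    return True


d = tempfile.mkdtemp()
path = os.path.join(d, "docker.service")
restarts = []
install_if_changed(path, "[Unit]\n", lambda: restarts.append(True))  # installs
install_if_changed(path, "[Unit]\n", lambda: restarts.append(True))  # no-op
print(len(restarts))  # -> 1: only the first call triggered a restart
```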
	I0908 12:18:33.612744    7416 machine.go:96] duration metric: took 45.4598226s to provisionDockerMachine
	I0908 12:18:33.612744    7416 client.go:171] duration metric: took 1m55.580735s to LocalClient.Create
	I0908 12:18:33.612858    7416 start.go:167] duration metric: took 1m55.5808495s to libmachine.API.Create "multinode-818700"
	I0908 12:18:33.612858    7416 start.go:293] postStartSetup for "multinode-818700-m02" (driver="hyperv")
	I0908 12:18:33.612858    7416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:18:33.624789    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:18:33.624789    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:35.708067    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:35.708533    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:35.708533    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:38.274909    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:38.274909    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:38.275593    7416 sshutil.go:53] new ssh client: &{IP:172.20.62.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:18:38.403341    7416 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7784916s)
	I0908 12:18:38.413715    7416 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:18:38.422745    7416 info.go:137] Remote host: Buildroot 2025.02
	I0908 12:18:38.422926    7416 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 12:18:38.423086    7416 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 12:18:38.424883    7416 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 12:18:38.424883    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 12:18:38.437557    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:18:38.461626    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 12:18:38.524308    7416 start.go:296] duration metric: took 4.9112323s for postStartSetup
	I0908 12:18:38.526231    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:40.631718    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:40.632723    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:40.632814    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:43.217019    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:43.217019    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:43.217792    7416 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:18:43.219905    7416 start.go:128] duration metric: took 2m5.1917574s to createHost
	I0908 12:18:43.219905    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:45.309124    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:45.309292    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:45.309292    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:47.940820    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:47.940820    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:47.947477    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:18:47.950874    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:18:47.950874    7416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 12:18:48.100572    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757333928.120202884
	
	I0908 12:18:48.100572    7416 fix.go:216] guest clock: 1757333928.120202884
	I0908 12:18:48.100572    7416 fix.go:229] Guest: 2025-09-08 12:18:48.120202884 +0000 UTC Remote: 2025-09-08 12:18:43.2199059 +0000 UTC m=+337.768561301 (delta=4.900296984s)
	I0908 12:18:48.100572    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:50.289575    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:50.289575    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:50.289575    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:52.777658    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:52.777658    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:52.784748    7416 main.go:141] libmachine: Using SSH client type: native
	I0908 12:18:52.785404    7416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.62.186 22 <nil> <nil>}
	I0908 12:18:52.785439    7416 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757333928
	I0908 12:18:52.949455    7416 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 12:18:48 UTC 2025
	
	I0908 12:18:52.949455    7416 fix.go:236] clock set: Mon Sep  8 12:18:48 UTC 2025
	 (err=<nil>)
	I0908 12:18:52.949455    7416 start.go:83] releasing machines lock for "multinode-818700-m02", held for 2m14.9211842s
	I0908 12:18:52.949455    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:55.127515    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:55.127767    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:55.127910    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:57.662906    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:18:57.663545    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:57.666984    7416 out.go:179] * Found network options:
	I0908 12:18:57.670201    7416 out.go:179]   - NO_PROXY=172.20.50.55
	W0908 12:18:57.672991    7416 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 12:18:57.675510    7416 out.go:179]   - NO_PROXY=172.20.50.55
	W0908 12:18:57.677553    7416 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 12:18:57.679468    7416 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 12:18:57.681581    7416 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 12:18:57.682118    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:57.693700    7416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:18:57.693700    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:18:59.942506    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:59.942506    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:59.942506    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:18:59.944485    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:18:59.944485    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:18:59.944485    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:19:02.618780    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:19:02.618780    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:19:02.619846    7416 sshutil.go:53] new ssh client: &{IP:172.20.62.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:19:02.655518    7416 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:19:02.656356    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:19:02.656356    7416 sshutil.go:53] new ssh client: &{IP:172.20.62.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:19:02.731646    7416 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0500015s)
	W0908 12:19:02.731646    7416 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 12:19:02.752991    7416 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0592274s)
	W0908 12:19:02.752991    7416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 12:19:02.765415    7416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:19:02.800151    7416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 12:19:02.800238    7416 start.go:495] detecting cgroup driver to use...
	I0908 12:19:02.800335    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:19:02.851349    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0908 12:19:02.852458    7416 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 12:19:02.852458    7416 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 12:19:02.883023    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 12:19:02.905809    7416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:19:02.918054    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:19:02.951528    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:19:02.983615    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:19:03.015456    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:19:03.048059    7416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:19:03.082477    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:19:03.115653    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:19:03.147677    7416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 12:19:03.183068    7416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:19:03.203754    7416 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 12:19:03.216425    7416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 12:19:03.253226    7416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:19:03.282459    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:19:03.503032    7416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:19:03.563103    7416 start.go:495] detecting cgroup driver to use...
	I0908 12:19:03.575903    7416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:19:03.615137    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:19:03.665095    7416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:19:03.714856    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:19:03.752114    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:19:03.789604    7416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 12:19:03.848895    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:19:03.874376    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:19:03.931968    7416 ssh_runner.go:195] Run: which cri-dockerd
	I0908 12:19:03.949012    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:19:03.967874    7416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:19:04.017889    7416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:19:04.274893    7416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:19:04.494879    7416 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:19:04.494879    7416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:19:04.546468    7416 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:19:04.585211    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:19:04.818324    7416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:19:05.581677    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:19:05.621633    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:19:05.662268    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:19:05.706658    7416 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:19:05.965257    7416 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:19:06.217079    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:19:06.470272    7416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:19:06.535946    7416 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:19:06.571589    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:19:06.810755    7416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:19:06.988526    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:19:07.018861    7416 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:19:07.032466    7416 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:19:07.043389    7416 start.go:563] Will wait 60s for crictl version
	I0908 12:19:07.056086    7416 ssh_runner.go:195] Run: which crictl
	I0908 12:19:07.074761    7416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:19:07.138632    7416 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:19:07.153430    7416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:19:07.199763    7416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:19:07.248678    7416 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:19:07.251781    7416 out.go:179]   - env NO_PROXY=172.20.50.55
	I0908 12:19:07.253956    7416 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 12:19:07.258583    7416 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 12:19:07.258583    7416 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 12:19:07.258583    7416 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 12:19:07.258583    7416 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 12:19:07.263021    7416 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 12:19:07.263021    7416 ip.go:218] interface addr: 172.20.48.1/20
	I0908 12:19:07.277100    7416 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 12:19:07.285046    7416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:19:07.309761    7416 mustload.go:65] Loading cluster: multinode-818700
	I0908 12:19:07.310775    7416 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:19:07.311616    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:19:09.452815    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:19:09.452815    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:19:09.452815    7416 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:19:09.453608    7416 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700 for IP: 172.20.62.186
	I0908 12:19:09.453608    7416 certs.go:194] generating shared ca certs ...
	I0908 12:19:09.453608    7416 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:19:09.454235    7416 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 12:19:09.454474    7416 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 12:19:09.454474    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 12:19:09.454474    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 12:19:09.455207    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 12:19:09.455389    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 12:19:09.455986    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 12:19:09.456191    7416 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 12:19:09.456191    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 12:19:09.456191    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 12:19:09.456973    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 12:19:09.457238    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 12:19:09.457464    7416 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 12:19:09.458040    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 12:19:09.458252    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 12:19:09.458557    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:19:09.458873    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:19:09.522777    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:19:09.573263    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:19:09.625624    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:19:09.687800    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 12:19:09.748516    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 12:19:09.805840    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:19:09.873497    7416 ssh_runner.go:195] Run: openssl version
	I0908 12:19:09.894748    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 12:19:09.928052    7416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 12:19:09.935298    7416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 12:19:09.947721    7416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 12:19:09.969297    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 12:19:10.004531    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 12:19:10.038900    7416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 12:19:10.048127    7416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 12:19:10.061860    7416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 12:19:10.084951    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:19:10.119021    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:19:10.154183    7416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:19:10.160766    7416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:19:10.171692    7416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:19:10.184571    7416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:19:10.229784    7416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:19:10.237405    7416 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 12:19:10.237635    7416 kubeadm.go:926] updating node {m02 172.20.62.186 8443 v1.34.0 docker false true} ...
	I0908 12:19:10.237855    7416 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-818700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.62.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:19:10.249321    7416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:19:10.275056    7416 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0908 12:19:10.288376    7416 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0908 12:19:10.309660    7416 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256
	I0908 12:19:10.309660    7416 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
	I0908 12:19:10.309736    7416 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256
	I0908 12:19:10.309736    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 12:19:10.309736    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 12:19:10.324013    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:19:10.324013    7416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0908 12:19:10.325093    7416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0908 12:19:10.352757    7416 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0908 12:19:10.352809    7416 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0908 12:19:10.352757    7416 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 12:19:10.353048    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0908 12:19:10.353119    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0908 12:19:10.371035    7416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0908 12:19:10.458516    7416 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0908 12:19:10.458850    7416 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0908 12:19:11.766359    7416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0908 12:19:11.790825    7416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0908 12:19:11.825871    7416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:19:11.881895    7416 ssh_runner.go:195] Run: grep 172.20.50.55	control-plane.minikube.internal$ /etc/hosts
	I0908 12:19:11.888868    7416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.50.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:19:11.927935    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:19:12.179196    7416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:19:12.244685    7416 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:19:12.245787    7416 start.go:317] joinCluster: &{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:19:12.245954    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0908 12:19:12.246056    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:19:14.464200    7416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:19:14.464576    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:19:14.464667    7416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:19:16.988871    7416 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:19:16.988871    7416 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:19:16.990661    7416 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:19:17.412817    7416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1666125s)
	I0908 12:19:17.412943    7416 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0908 12:19:17.413023    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mcw1vt.080vftweyjfja2mt --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-818700-m02"
	I0908 12:19:20.015291    7416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mcw1vt.080vftweyjfja2mt --discovery-token-ca-cert-hash sha256:6f0ed86d1fb618064431da971fb4f5228ff7cd998cb290916759978661fe58e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-818700-m02": (2.6022055s)
	I0908 12:19:20.015291    7416 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0908 12:19:20.512429    7416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-818700-m02 minikube.k8s.io/updated_at=2025_09_08T12_19_20_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=multinode-818700 minikube.k8s.io/primary=false
	I0908 12:19:20.645148    7416 start.go:319] duration metric: took 8.3992549s to joinCluster
	I0908 12:19:20.645148    7416 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0908 12:19:20.646187    7416 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:19:20.648146    7416 out.go:179] * Verifying Kubernetes components...
	I0908 12:19:20.665149    7416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:19:20.915751    7416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:19:20.947580    7416 kapi.go:59] client config for multinode-818700: &rest.Config{Host:"https://172.20.50.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 12:19:20.948979    7416 node_ready.go:35] waiting up to 6m0s for node "multinode-818700-m02" to be "Ready" ...
	W0908 12:19:22.954912    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:25.452780    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:27.454533    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:29.697195    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:32.018390    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:34.454346    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:36.954063    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:38.954670    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:40.965637    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:43.454447    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:45.955664    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:48.454955    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	W0908 12:19:50.954611    7416 node_ready.go:57] node "multinode-818700-m02" has "Ready":"False" status (will retry)
	I0908 12:19:52.453931    7416 node_ready.go:49] node "multinode-818700-m02" is "Ready"
	I0908 12:19:52.453931    7416 node_ready.go:38] duration metric: took 31.5045548s for node "multinode-818700-m02" to be "Ready" ...
	I0908 12:19:52.453931    7416 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:19:52.465163    7416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:19:52.491900    7416 system_svc.go:56] duration metric: took 37.9682ms WaitForService to wait for kubelet
	I0908 12:19:52.491900    7416 kubeadm.go:578] duration metric: took 31.8463504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:19:52.491900    7416 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:19:52.495614    7416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:19:52.495614    7416 node_conditions.go:123] node cpu capacity is 2
	I0908 12:19:52.495614    7416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:19:52.495614    7416 node_conditions.go:123] node cpu capacity is 2
	I0908 12:19:52.495614    7416 node_conditions.go:105] duration metric: took 3.7142ms to run NodePressure ...
	I0908 12:19:52.495614    7416 start.go:241] waiting for startup goroutines ...
	I0908 12:19:52.495614    7416 start.go:255] writing updated cluster config ...
	I0908 12:19:52.509162    7416 ssh_runner.go:195] Run: rm -f paused
	I0908 12:19:52.517772    7416 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:19:52.519218    7416 kapi.go:59] client config for multinode-818700: &rest.Config{Host:"https://172.20.50.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 12:19:52.524416    7416 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-svhws" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:52.530680    7416 pod_ready.go:94] pod "coredns-66bc5c9577-svhws" is "Ready"
	I0908 12:19:52.530680    7416 pod_ready.go:86] duration metric: took 6.1556ms for pod "coredns-66bc5c9577-svhws" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:52.535129    7416 pod_ready.go:83] waiting for pod "etcd-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:52.541903    7416 pod_ready.go:94] pod "etcd-multinode-818700" is "Ready"
	I0908 12:19:52.541903    7416 pod_ready.go:86] duration metric: took 6.774ms for pod "etcd-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:52.545424    7416 pod_ready.go:83] waiting for pod "kube-apiserver-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:52.552564    7416 pod_ready.go:94] pod "kube-apiserver-multinode-818700" is "Ready"
	I0908 12:19:52.552564    7416 pod_ready.go:86] duration metric: took 7.1397ms for pod "kube-apiserver-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:52.556046    7416 pod_ready.go:83] waiting for pod "kube-controller-manager-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:52.719951    7416 request.go:683] "Waited before sending request" delay="163.9025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-818700"
	I0908 12:19:52.920073    7416 request.go:683] "Waited before sending request" delay="194.5491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/nodes/multinode-818700"
	I0908 12:19:52.925671    7416 pod_ready.go:94] pod "kube-controller-manager-multinode-818700" is "Ready"
	I0908 12:19:52.925671    7416 pod_ready.go:86] duration metric: took 369.6199ms for pod "kube-controller-manager-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:53.120167    7416 request.go:683] "Waited before sending request" delay="194.3056ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0908 12:19:53.125722    7416 pod_ready.go:83] waiting for pod "kube-proxy-m5ksd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:53.320468    7416 request.go:683] "Waited before sending request" delay="194.6282ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ksd"
	I0908 12:19:53.520285    7416 request.go:683] "Waited before sending request" delay="194.6454ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/nodes/multinode-818700"
	I0908 12:19:53.524715    7416 pod_ready.go:94] pod "kube-proxy-m5ksd" is "Ready"
	I0908 12:19:53.524715    7416 pod_ready.go:86] duration metric: took 398.9158ms for pod "kube-proxy-m5ksd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:53.524715    7416 pod_ready.go:83] waiting for pod "kube-proxy-m9smd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:53.720627    7416 request.go:683] "Waited before sending request" delay="195.3669ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m9smd"
	I0908 12:19:53.920093    7416 request.go:683] "Waited before sending request" delay="190.7331ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/nodes/multinode-818700-m02"
	I0908 12:19:53.924044    7416 pod_ready.go:94] pod "kube-proxy-m9smd" is "Ready"
	I0908 12:19:53.924044    7416 pod_ready.go:86] duration metric: took 399.3242ms for pod "kube-proxy-m9smd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:54.121234    7416 request.go:683] "Waited before sending request" delay="197.1874ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0908 12:19:54.127671    7416 pod_ready.go:83] waiting for pod "kube-scheduler-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:54.320591    7416 request.go:683] "Waited before sending request" delay="192.6924ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-818700"
	I0908 12:19:54.520426    7416 request.go:683] "Waited before sending request" delay="194.5954ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.20.50.55:8443/api/v1/nodes/multinode-818700"
	I0908 12:19:54.525370    7416 pod_ready.go:94] pod "kube-scheduler-multinode-818700" is "Ready"
	I0908 12:19:54.525615    7416 pod_ready.go:86] duration metric: took 397.7985ms for pod "kube-scheduler-multinode-818700" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:19:54.525615    7416 pod_ready.go:40] duration metric: took 2.0077568s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:19:54.668843    7416 start.go:617] kubectl: 1.34.0, cluster: 1.34.0 (minor skew: 0)
	I0908 12:19:54.672806    7416 out.go:179] * Done! kubectl is now configured to use "multinode-818700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 08 12:15:47 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:47.089162880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 08 12:15:48 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:48.554344557Z" level=info msg="Loading containers: start."
	Sep 08 12:15:48 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:48.748977463Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 08 12:15:48 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:48.880428903Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count 1b092261e2be1f5853aae20d29ce0549b86a2c8d9eba256fd64c6471457bfc76], retrying...."
	Sep 08 12:15:48 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:48.969384835Z" level=info msg="Loading containers: done."
	Sep 08 12:15:48 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:48.990452624Z" level=info msg="Docker daemon" commit=249d679 containerd-snapshotter=false storage-driver=overlay2 version=28.4.0
	Sep 08 12:15:48 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:48.990551932Z" level=info msg="Initializing buildkit"
	Sep 08 12:15:49 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:49.016579219Z" level=info msg="Completed buildkit initialization"
	Sep 08 12:15:49 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:49.027666708Z" level=info msg="Daemon has completed initialization"
	Sep 08 12:15:49 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:49.027770617Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 08 12:15:49 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:49.027993634Z" level=info msg="API listen on [::]:2376"
	Sep 08 12:15:49 multinode-818700 dockerd[1774]: time="2025-09-08T12:15:49.028070941Z" level=info msg="API listen on /run/docker.sock"
	Sep 08 12:15:49 multinode-818700 systemd[1]: Started Docker Application Container Engine.
	Sep 08 12:15:58 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:15:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1b891663f1f6247dbc100dcd2b0d9bf03fa14d155d258c55de81f52faa961d85/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:15:58 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:15:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62d71a5295d0d312afe6611d143dced8441b86554e05b4d7018751ef12d58165/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:15:58 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:15:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a817a17208da831dcff03c287fc940fd9ed0f584e42972ae35ab38941193e80d/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:15:58 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:15:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e229cb205b5d08737737cbd85bd7caa5c54be17bc1658b591d9bc74b8e07613c/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:16:11 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:16:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 08 12:16:13 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:16:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7e1c24e28ed9fb50c52106725cc7d6b9bd5435d78ad5ed25096d4c11698beec2/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:16:13 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:16:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cf6168c36a0f93e0616e1d1e7e313953e9aed0453422ece66c00c95b8edee4bf/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:16:19 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:16:19Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 08 12:16:36 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:16:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da2be864ea38e76c0f6a99cd466b48d160ca9c84f8fcbfab2dcb59e65cd1c26d/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:16:36 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:16:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21dbed80ecd5d278aeb46331df8b2fad7927080af3440fd3a0382fa936530e06/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:20:19 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:20:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef2fb474e24471810da79ad574291ce8912dfdfa10973245f1c26d500bef6092/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 08 12:20:21 multinode-818700 cri-dockerd[1639]: time="2025-09-08T12:20:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1bc7b0f492c1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   49 seconds ago      Running             busybox                   0                   ef2fb474e2447       busybox-7b57f96db7-ztvwm
	4b397652bed65       52546a367cc9e                                                                                         4 minutes ago       Running             coredns                   0                   da2be864ea38e       coredns-66bc5c9577-svhws
	51939f01ba778       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   21dbed80ecd5d       storage-provisioner
	0e97a2b4abd9c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              4 minutes ago       Running             kindnet-cni               0                   cf6168c36a0f9       kindnet-5drb9
	a793eb6b8d638       df0860106674d                                                                                         4 minutes ago       Running             kube-proxy                0                   7e1c24e28ed9f       kube-proxy-m5ksd
	4ef5a92069c26       a0af72f2ec6d6                                                                                         5 minutes ago       Running             kube-controller-manager   0                   e229cb205b5d0       kube-controller-manager-multinode-818700
	07ac3a29d9318       46169d968e920                                                                                         5 minutes ago       Running             kube-scheduler            0                   a817a17208da8       kube-scheduler-multinode-818700
	3ae48749732c0       90550c43ad2bc                                                                                         5 minutes ago       Running             kube-apiserver            0                   62d71a5295d0d       kube-apiserver-multinode-818700
	19b41e0f8bcfe       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      0                   1b891663f1f62       etcd-multinode-818700
	
	
	==> coredns [4b397652bed6] <==
	[INFO] 10.244.1.2:57039 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101002s
	[INFO] 10.244.0.3:32828 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217705s
	[INFO] 10.244.0.3:36873 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000178403s
	[INFO] 10.244.0.3:52041 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000272205s
	[INFO] 10.244.0.3:55840 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247405s
	[INFO] 10.244.0.3:35598 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075002s
	[INFO] 10.244.0.3:43403 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162403s
	[INFO] 10.244.0.3:33397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199704s
	[INFO] 10.244.0.3:49318 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124702s
	[INFO] 10.244.1.2:35974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267706s
	[INFO] 10.244.1.2:49846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000223804s
	[INFO] 10.244.1.2:58033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146203s
	[INFO] 10.244.1.2:51546 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144103s
	[INFO] 10.244.0.3:37430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114603s
	[INFO] 10.244.0.3:54627 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000342107s
	[INFO] 10.244.0.3:39321 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000345607s
	[INFO] 10.244.0.3:51976 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000306507s
	[INFO] 10.244.1.2:59187 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168503s
	[INFO] 10.244.1.2:56345 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162604s
	[INFO] 10.244.1.2:46830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000160803s
	[INFO] 10.244.1.2:38005 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000196004s
	[INFO] 10.244.0.3:60239 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178904s
	[INFO] 10.244.0.3:42627 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233606s
	[INFO] 10.244.0.3:36186 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076402s
	[INFO] 10.244.0.3:36334 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000132303s
	
	
	==> describe nodes <==
	Name:               multinode-818700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-818700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=multinode-818700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_16_07_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:16:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-818700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:21:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:20:41 +0000   Mon, 08 Sep 2025 12:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:20:41 +0000   Mon, 08 Sep 2025 12:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:20:41 +0000   Mon, 08 Sep 2025 12:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:20:41 +0000   Mon, 08 Sep 2025 12:16:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.50.55
	  Hostname:    multinode-818700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 a86c863d9f744992bba16c2b3fa70829
	  System UUID:                aa27505c-10ba-8642-a967-ec436ee1d0a0
	  Boot ID:                    b75434ca-65b4-4bc1-ba6c-75e19e68e950
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-ztvwm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-66bc5c9577-svhws                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     4m59s
	  kube-system                 etcd-multinode-818700                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         5m5s
	  kube-system                 kindnet-5drb9                               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      5m
	  kube-system                 kube-apiserver-multinode-818700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-multinode-818700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-m5ksd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-multinode-818700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (7%)  220Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m57s                  kube-proxy       
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node multinode-818700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node multinode-818700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node multinode-818700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m5s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m4s                   kubelet          Node multinode-818700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s                   kubelet          Node multinode-818700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s                   kubelet          Node multinode-818700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m                     node-controller  Node multinode-818700 event: Registered Node multinode-818700 in Controller
	  Normal  NodeReady                4m36s                  kubelet          Node multinode-818700 status is now: NodeReady
	
	
	Name:               multinode-818700-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-818700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=multinode-818700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T12_19_20_0700
	                    minikube.k8s.io/version=v1.36.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:19:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-818700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:21:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:20:51 +0000   Mon, 08 Sep 2025 12:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:20:51 +0000   Mon, 08 Sep 2025 12:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:20:51 +0000   Mon, 08 Sep 2025 12:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:20:51 +0000   Mon, 08 Sep 2025 12:19:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.62.186
	  Hostname:    multinode-818700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	System Info:
	  Machine ID:                 3295ff67a04d4a15823f17f0c1453bd5
	  System UUID:                ac897804-3d21-a64e-960d-5d53bcb60fdc
	  Boot ID:                    7c864719-b6ad-4966-841e-c0feb0da713e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-ndqg5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-chkc2               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      112s
	  kube-system                 kube-proxy-m9smd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x3 over 112s)  kubelet          Node multinode-818700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x3 over 112s)  kubelet          Node multinode-818700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x3 over 112s)  kubelet          Node multinode-818700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     111s                 cidrAllocator    Node multinode-818700-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           110s                 node-controller  Node multinode-818700-m02 event: Registered Node multinode-818700-m02 in Controller
	  Normal  NodeReady                79s                  kubelet          Node multinode-818700-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 8 12:14] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.002400] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.002572] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.690647] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.000052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.023092] (rpcbind)[115]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.529154] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 8 12:15] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.194847] kauditd_printk_skb: 237 callbacks suppressed
	[  +0.136921] kauditd_printk_skb: 193 callbacks suppressed
	[Sep 8 12:16] kauditd_printk_skb: 159 callbacks suppressed
	[  +0.651367] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.178982] kauditd_printk_skb: 129 callbacks suppressed
	[ +15.805927] kauditd_printk_skb: 17 callbacks suppressed
	[Sep 8 12:20] kauditd_printk_skb: 56 callbacks suppressed
	[ +31.336589] hrtimer: interrupt took 2715778 ns
	
	
	==> etcd [19b41e0f8bcf] <==
	{"level":"warn","ts":"2025-09-08T12:16:01.999904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:16:02.022223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:16:02.048203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:16:02.066847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:16:02.087737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:16:02.102687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:16:02.250690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:16:20.147195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"349.255351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-818700\" limit:1 ","response":"range_response_count:1 size:4392"}
	{"level":"info","ts":"2025-09-08T12:16:20.147298Z","caller":"traceutil/trace.go:172","msg":"trace[2133081837] range","detail":"{range_begin:/registry/minions/multinode-818700; range_end:; response_count:1; response_revision:423; }","duration":"349.358859ms","start":"2025-09-08T12:16:19.797925Z","end":"2025-09-08T12:16:20.147284Z","steps":["trace[2133081837] 'range keys from in-memory index tree'  (duration: 349.139844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T12:16:20.147335Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T12:16:19.797906Z","time spent":"349.419964ms","remote":"127.0.0.1:58864","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":4415,"request content":"key:\"/registry/minions/multinode-818700\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T12:16:20.147175Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"345.948618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T12:16:20.147494Z","caller":"traceutil/trace.go:172","msg":"trace[2136089272] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:423; }","duration":"346.283642ms","start":"2025-09-08T12:16:19.801199Z","end":"2025-09-08T12:16:20.147482Z","steps":["trace[2136089272] 'range keys from in-memory index tree'  (duration: 345.870312ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T12:16:20.147522Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T12:16:19.801190Z","time spent":"346.321645ms","remote":"127.0.0.1:58920","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":0,"response size":28,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 "}
	{"level":"info","ts":"2025-09-08T12:16:20.386185Z","caller":"traceutil/trace.go:172","msg":"trace[1502131073] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"210.747909ms","start":"2025-09-08T12:16:20.175372Z","end":"2025-09-08T12:16:20.386120Z","steps":["trace[1502131073] 'process raft request'  (duration: 164.367306ms)","trace[1502131073] 'compare'  (duration: 45.35513ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T12:16:20.570205Z","caller":"traceutil/trace.go:172","msg":"trace[2069696361] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"170.31383ms","start":"2025-09-08T12:16:20.399875Z","end":"2025-09-08T12:16:20.570189Z","steps":["trace[2069696361] 'process raft request'  (duration: 170.224523ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:16:49.597234Z","caller":"traceutil/trace.go:172","msg":"trace[48525353] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"133.728138ms","start":"2025-09-08T12:16:49.463488Z","end":"2025-09-08T12:16:49.597216Z","steps":["trace[48525353] 'process raft request'  (duration: 133.50682ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:19:13.142698Z","caller":"traceutil/trace.go:172","msg":"trace[246161771] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"189.919931ms","start":"2025-09-08T12:19:12.952694Z","end":"2025-09-08T12:19:13.142614Z","steps":["trace[246161771] 'process raft request'  (duration: 189.727739ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:19:17.355399Z","caller":"traceutil/trace.go:172","msg":"trace[1846632703] linearizableReadLoop","detail":"{readStateIndex:639; appliedIndex:639; }","duration":"142.39769ms","start":"2025-09-08T12:19:17.212983Z","end":"2025-09-08T12:19:17.355381Z","steps":["trace[1846632703] 'read index received'  (duration: 142.39239ms)","trace[1846632703] 'applied index is now lower than readState.Index'  (duration: 4.3µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T12:19:17.355881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.603182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/kube-system/bootstrap-token-mcw1vt\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T12:19:17.355921Z","caller":"traceutil/trace.go:172","msg":"trace[1275805249] range","detail":"{range_begin:/registry/secrets/kube-system/bootstrap-token-mcw1vt; range_end:; response_count:0; response_revision:590; }","duration":"142.93967ms","start":"2025-09-08T12:19:17.212973Z","end":"2025-09-08T12:19:17.355913Z","steps":["trace[1275805249] 'agreement among raft nodes before linearized reading'  (duration: 142.523985ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:19:17.356736Z","caller":"traceutil/trace.go:172","msg":"trace[1418397520] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"175.951562ms","start":"2025-09-08T12:19:17.180774Z","end":"2025-09-08T12:19:17.356726Z","steps":["trace[1418397520] 'process raft request'  (duration: 174.721007ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:19:29.720267Z","caller":"traceutil/trace.go:172","msg":"trace[2090343497] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:693; }","duration":"241.380198ms","start":"2025-09-08T12:19:29.478838Z","end":"2025-09-08T12:19:29.720218Z","steps":["trace[2090343497] 'read index received'  (duration: 241.373998ms)","trace[2090343497] 'applied index is now lower than readState.Index'  (duration: 5.1µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T12:19:29.720490Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.635793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-818700-m02\" limit:1 ","response":"range_response_count:1 size:2685"}
	{"level":"info","ts":"2025-09-08T12:19:29.720515Z","caller":"traceutil/trace.go:172","msg":"trace[1064292341] range","detail":"{range_begin:/registry/minions/multinode-818700-m02; range_end:; response_count:1; response_revision:640; }","duration":"241.678092ms","start":"2025-09-08T12:19:29.478830Z","end":"2025-09-08T12:19:29.720508Z","steps":["trace[1064292341] 'agreement among raft nodes before linearized reading'  (duration: 241.493996ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:19:29.720894Z","caller":"traceutil/trace.go:172","msg":"trace[1327411974] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"264.672987ms","start":"2025-09-08T12:19:29.456095Z","end":"2025-09-08T12:19:29.720768Z","steps":["trace[1327411974] 'process raft request'  (duration: 264.461992ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:21:11 up 7 min,  0 users,  load average: 0.24, 0.45, 0.26
	Linux multinode-818700 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [0e97a2b4abd9] <==
	I0908 12:20:01.604327       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:20:11.609097       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:20:11.609205       1 main.go:301] handling current node
	I0908 12:20:11.609328       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:20:11.609340       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:20:21.605052       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:20:21.605102       1 main.go:301] handling current node
	I0908 12:20:21.605121       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:20:21.605128       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:20:31.609465       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:20:31.609612       1 main.go:301] handling current node
	I0908 12:20:31.609677       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:20:31.609687       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:20:41.610954       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:20:41.610990       1 main.go:301] handling current node
	I0908 12:20:41.611007       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:20:41.611013       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:20:51.608564       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:20:51.608774       1 main.go:301] handling current node
	I0908 12:20:51.608798       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:20:51.608806       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:21:01.611470       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:21:01.611579       1 main.go:301] handling current node
	I0908 12:21:01.611599       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:21:01.611606       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3ae48749732c] <==
	I0908 12:16:06.830558       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 12:16:06.887489       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 12:16:06.937453       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 12:16:11.745452       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0908 12:16:11.930542       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 12:16:12.346133       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 12:16:12.439502       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 12:17:09.818746       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:17:14.345058       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:18:22.037755       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:18:31.040518       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:19:33.130362       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:19:40.787821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 12:20:25.300245       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52069: use of closed network connection
	E0908 12:20:25.829171       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52072: use of closed network connection
	E0908 12:20:26.449797       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52074: use of closed network connection
	E0908 12:20:26.940122       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52076: use of closed network connection
	E0908 12:20:27.466410       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52078: use of closed network connection
	E0908 12:20:28.004713       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52080: use of closed network connection
	E0908 12:20:28.996000       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52083: use of closed network connection
	E0908 12:20:39.523818       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52085: use of closed network connection
	E0908 12:20:40.011311       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52087: use of closed network connection
	I0908 12:20:43.363537       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:20:43.694493       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 12:20:50.529545       1 conn.go:339] Error on socket receive: read tcp 172.20.50.55:8443->172.20.48.1:52089: use of closed network connection
	
	
	==> kube-controller-manager [4ef5a92069c2] <==
	I0908 12:16:11.089244       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 12:16:11.092033       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 12:16:11.154213       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 12:16:11.154912       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 12:16:11.156277       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 12:16:11.093365       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 12:16:11.164061       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 12:16:11.089147       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 12:16:11.166715       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:16:11.167082       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 12:16:11.167270       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 12:16:11.106095       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:16:11.109843       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 12:16:11.109866       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 12:16:11.106331       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 12:16:11.106571       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 12:16:11.204213       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-818700" podCIDRs=["10.244.0.0/24"]
	I0908 12:16:36.089332       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0908 12:19:19.891019       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-818700-m02\" does not exist"
	I0908 12:19:19.946985       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-818700-m02" podCIDRs=["10.244.1.0/24"]
	E0908 12:19:20.013294       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-818700-m02\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.2.0/24\",\"10.244.1.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-818700-m02" podCIDRs=["10.244.2.0/24"]
	E0908 12:19:20.013360       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-818700-m02\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.2.0/24\",\"10.244.1.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-818700-m02"
	E0908 12:19:20.013400       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-818700-m02': failed to patch node CIDR: Node \"multinode-818700-m02\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.2.0/24\",\"10.244.1.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0908 12:19:21.119846       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-818700-m02"
	I0908 12:19:52.072790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	
	
	==> kube-proxy [a793eb6b8d63] <==
	I0908 12:16:13.616093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:16:13.717598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:16:13.717689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.20.50.55"]
	E0908 12:16:13.718082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:16:13.775671       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 12:16:13.775749       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 12:16:13.775834       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:16:13.792255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:16:13.792951       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:16:13.792988       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:16:13.794927       1 config.go:200] "Starting service config controller"
	I0908 12:16:13.794963       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:16:13.795318       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:16:13.795358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:16:13.795376       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:16:13.795381       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:16:13.801611       1 config.go:309] "Starting node config controller"
	I0908 12:16:13.801681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:16:13.801690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:16:13.895726       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:16:13.895765       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 12:16:13.895770       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [07ac3a29d931] <==
	E0908 12:16:03.147272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 12:16:03.147325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 12:16:03.147404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 12:16:03.147478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 12:16:03.147530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 12:16:04.026388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 12:16:04.030740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 12:16:04.109598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 12:16:04.149611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 12:16:04.177331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 12:16:04.178825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 12:16:04.274373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 12:16:04.297466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 12:16:04.305787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 12:16:04.343496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 12:16:04.391271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 12:16:04.417137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0908 12:16:04.470816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 12:16:04.482472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 12:16:04.494585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 12:16:04.533153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 12:16:04.546388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 12:16:04.586267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 12:16:04.655917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0908 12:16:06.211726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.967330    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7645ef6c-8a22-4f86-9e96-70c0b24ea598-cni-cfg\") pod \"kindnet-5drb9\" (UID: \"7645ef6c-8a22-4f86-9e96-70c0b24ea598\") " pod="kube-system/kindnet-5drb9"
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.967526    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7300c145-be03-4dae-93df-7b201133bc8a-kube-proxy\") pod \"kube-proxy-m5ksd\" (UID: \"7300c145-be03-4dae-93df-7b201133bc8a\") " pod="kube-system/kube-proxy-m5ksd"
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.967563    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7645ef6c-8a22-4f86-9e96-70c0b24ea598-xtables-lock\") pod \"kindnet-5drb9\" (UID: \"7645ef6c-8a22-4f86-9e96-70c0b24ea598\") " pod="kube-system/kindnet-5drb9"
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.967805    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7300c145-be03-4dae-93df-7b201133bc8a-lib-modules\") pod \"kube-proxy-m5ksd\" (UID: \"7300c145-be03-4dae-93df-7b201133bc8a\") " pod="kube-system/kube-proxy-m5ksd"
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.967892    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rjj\" (UniqueName: \"kubernetes.io/projected/7300c145-be03-4dae-93df-7b201133bc8a-kube-api-access-m8rjj\") pod \"kube-proxy-m5ksd\" (UID: \"7300c145-be03-4dae-93df-7b201133bc8a\") " pod="kube-system/kube-proxy-m5ksd"
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.968019    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7645ef6c-8a22-4f86-9e96-70c0b24ea598-lib-modules\") pod \"kindnet-5drb9\" (UID: \"7645ef6c-8a22-4f86-9e96-70c0b24ea598\") " pod="kube-system/kindnet-5drb9"
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.968139    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7300c145-be03-4dae-93df-7b201133bc8a-xtables-lock\") pod \"kube-proxy-m5ksd\" (UID: \"7300c145-be03-4dae-93df-7b201133bc8a\") " pod="kube-system/kube-proxy-m5ksd"
	Sep 08 12:16:11 multinode-818700 kubelet[2779]: I0908 12:16:11.968337    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dt62\" (UniqueName: \"kubernetes.io/projected/7645ef6c-8a22-4f86-9e96-70c0b24ea598-kube-api-access-5dt62\") pod \"kindnet-5drb9\" (UID: \"7645ef6c-8a22-4f86-9e96-70c0b24ea598\") " pod="kube-system/kindnet-5drb9"
	Sep 08 12:16:13 multinode-818700 kubelet[2779]: I0908 12:16:13.138201    2779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1c24e28ed9fb50c52106725cc7d6b9bd5435d78ad5ed25096d4c11698beec2"
	Sep 08 12:16:13 multinode-818700 kubelet[2779]: I0908 12:16:13.515453    2779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6168c36a0f93e0616e1d1e7e313953e9aed0453422ece66c00c95b8edee4bf"
	Sep 08 12:16:14 multinode-818700 kubelet[2779]: I0908 12:16:14.585089    2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m5ksd" podStartSLOduration=3.585072404 podStartE2EDuration="3.585072404s" podCreationTimestamp="2025-09-08 12:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 12:16:14.581958696 +0000 UTC m=+7.822833987" watchObservedRunningTime="2025-09-08 12:16:14.585072404 +0000 UTC m=+7.825947795"
	Sep 08 12:16:35 multinode-818700 kubelet[2779]: I0908 12:16:35.827040    2779 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 08 12:16:35 multinode-818700 kubelet[2779]: I0908 12:16:35.930989    2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5drb9" podStartSLOduration=18.870822651 podStartE2EDuration="24.930971198s" podCreationTimestamp="2025-09-08 12:16:11 +0000 UTC" firstStartedPulling="2025-09-08 12:16:13.523336212 +0000 UTC m=+6.764211603" lastFinishedPulling="2025-09-08 12:16:19.583484859 +0000 UTC m=+12.824360150" observedRunningTime="2025-09-08 12:16:21.664229065 +0000 UTC m=+14.905104456" watchObservedRunningTime="2025-09-08 12:16:35.930971198 +0000 UTC m=+29.171846589"
	Sep 08 12:16:36 multinode-818700 kubelet[2779]: I0908 12:16:36.077797    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pp9\" (UniqueName: \"kubernetes.io/projected/c5177fef-0793-4291-adac-1b9fa372fa06-kube-api-access-r2pp9\") pod \"storage-provisioner\" (UID: \"c5177fef-0793-4291-adac-1b9fa372fa06\") " pod="kube-system/storage-provisioner"
	Sep 08 12:16:36 multinode-818700 kubelet[2779]: I0908 12:16:36.077855    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd9b9019-0603-4fa5-8b64-d23b1f50d4fe-config-volume\") pod \"coredns-66bc5c9577-svhws\" (UID: \"cd9b9019-0603-4fa5-8b64-d23b1f50d4fe\") " pod="kube-system/coredns-66bc5c9577-svhws"
	Sep 08 12:16:36 multinode-818700 kubelet[2779]: I0908 12:16:36.077878    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7rz\" (UniqueName: \"kubernetes.io/projected/cd9b9019-0603-4fa5-8b64-d23b1f50d4fe-kube-api-access-sk7rz\") pod \"coredns-66bc5c9577-svhws\" (UID: \"cd9b9019-0603-4fa5-8b64-d23b1f50d4fe\") " pod="kube-system/coredns-66bc5c9577-svhws"
	Sep 08 12:16:36 multinode-818700 kubelet[2779]: I0908 12:16:36.077904    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5177fef-0793-4291-adac-1b9fa372fa06-tmp\") pod \"storage-provisioner\" (UID: \"c5177fef-0793-4291-adac-1b9fa372fa06\") " pod="kube-system/storage-provisioner"
	Sep 08 12:16:36 multinode-818700 kubelet[2779]: I0908 12:16:36.791495    2779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da2be864ea38e76c0f6a99cd466b48d160ca9c84f8fcbfab2dcb59e65cd1c26d"
	Sep 08 12:16:36 multinode-818700 kubelet[2779]: I0908 12:16:36.842339    2779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21dbed80ecd5d278aeb46331df8b2fad7927080af3440fd3a0382fa936530e06"
	Sep 08 12:16:37 multinode-818700 kubelet[2779]: I0908 12:16:37.916178    2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.916160802 podStartE2EDuration="17.916160802s" podCreationTimestamp="2025-09-08 12:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 12:16:37.892898906 +0000 UTC m=+31.133774197" watchObservedRunningTime="2025-09-08 12:16:37.916160802 +0000 UTC m=+31.157036093"
	Sep 08 12:16:37 multinode-818700 kubelet[2779]: I0908 12:16:37.916278    2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-svhws" podStartSLOduration=25.91627281 podStartE2EDuration="25.91627281s" podCreationTimestamp="2025-09-08 12:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 12:16:37.914873502 +0000 UTC m=+31.155748893" watchObservedRunningTime="2025-09-08 12:16:37.91627281 +0000 UTC m=+31.157148101"
	Sep 08 12:20:18 multinode-818700 kubelet[2779]: I0908 12:20:18.883509    2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tbl\" (UniqueName: \"kubernetes.io/projected/95c2663a-c807-4987-96e5-c595da610ef5-kube-api-access-78tbl\") pod \"busybox-7b57f96db7-ztvwm\" (UID: \"95c2663a-c807-4987-96e5-c595da610ef5\") " pod="default/busybox-7b57f96db7-ztvwm"
	Sep 08 12:20:23 multinode-818700 kubelet[2779]: I0908 12:20:23.161603    2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-ztvwm" podStartSLOduration=3.062295698 podStartE2EDuration="5.16158413s" podCreationTimestamp="2025-09-08 12:20:18 +0000 UTC" firstStartedPulling="2025-09-08 12:20:19.696935568 +0000 UTC m=+252.937810959" lastFinishedPulling="2025-09-08 12:20:21.7962241 +0000 UTC m=+255.037099391" observedRunningTime="2025-09-08 12:20:23.160797215 +0000 UTC m=+256.401672506" watchObservedRunningTime="2025-09-08 12:20:23.16158413 +0000 UTC m=+256.402459521"
	Sep 08 12:20:25 multinode-818700 kubelet[2779]: E0908 12:20:25.830114    2779 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41674->127.0.0.1:44335: write tcp 127.0.0.1:41674->127.0.0.1:44335: write: broken pipe
	Sep 08 12:20:28 multinode-818700 kubelet[2779]: E0908 12:20:28.004396    2779 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41688->127.0.0.1:44335: write tcp 127.0.0.1:41688->127.0.0.1:44335: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-818700 -n multinode-818700
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-818700 -n multinode-818700: (12.371807s)
helpers_test.go:269: (dbg) Run:  kubectl --context multinode-818700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (56.86s)
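The only kubelet errors in the post-mortem above are the two `upgradeaware.go:427` "Error proxying data from client to backend ... broken pipe" lines, which correspond to the exec/attach streams the ping test opens into the busybox pod. When triaging several of these reports, a small throwaway filter can pull those lines and the backend address out of a log dump. This is a triage sketch, not part of the minikube test harness; the function name and regex are mine:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// proxyErrRe matches kubelet lines of the form seen above:
//   ... upgradeaware.go:427] Error proxying data from client to backend:
//   readfrom tcp A->B: write tcp A->B: write: broken pipe
var proxyErrRe = regexp.MustCompile(
	`upgradeaware\.go:\d+\] Error proxying data from client to backend: .*write tcp ([\d.]+:\d+)->([\d.]+:\d+)`)

// proxyErrorTargets returns the backend address of every proxy error in a log dump.
func proxyErrorTargets(log string) []string {
	var targets []string
	for _, line := range strings.Split(log, "\n") {
		if m := proxyErrRe.FindStringSubmatch(line); m != nil {
			targets = append(targets, m[2]) // second capture group: the backend side
		}
	}
	return targets
}

func main() {
	sample := `Sep 08 12:20:25 multinode-818700 kubelet[2779]: E0908 12:20:25.830114    2779 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41674->127.0.0.1:44335: write tcp 127.0.0.1:41674->127.0.0.1:44335: write: broken pipe
Sep 08 12:20:28 multinode-818700 kubelet[2779]: E0908 12:20:28.004396    2779 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41688->127.0.0.1:44335: write tcp 127.0.0.1:41688->127.0.0.1:44335: write: broken pipe`
	fmt.Println(proxyErrorTargets(sample))
}
```

Both failures in this run point at the same backend (`127.0.0.1:44335`), which suggests a single broken stream endpoint rather than two unrelated drops.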

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (439.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-818700
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-818700
E0908 12:37:50.407570   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-818700: (1m40.1286624s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-818700 --wait=true -v=5 --alsologtostderr
E0908 12:38:18.409718   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:15.323566   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:42:50.411168   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-818700 --wait=true -v=5 --alsologtostderr: exit status 1 (5m4.0062304s)

                                                
                                                
-- stdout --
	* [multinode-818700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-818700" primary control-plane node in "multinode-818700" cluster
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	* Starting "multinode-818700-m02" worker node in "multinode-818700" cluster
	* Found network options:
	  - NO_PROXY=172.20.59.7
	  - NO_PROXY=172.20.59.7
	  - env NO_PROXY=172.20.59.7

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:38:01.609371   13072 out.go:360] Setting OutFile to fd 2044 ...
	I0908 12:38:01.692836   13072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:38:01.692836   13072 out.go:374] Setting ErrFile to fd 2016...
	I0908 12:38:01.692836   13072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:38:01.711255   13072 out.go:368] Setting JSON to false
	I0908 12:38:01.716512   13072 start.go:130] hostinfo: {"hostname":"minikube6","uptime":303933,"bootTime":1757031148,"procs":182,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 12:38:01.716512   13072 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 12:38:01.846989   13072 out.go:179] * [multinode-818700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 12:38:01.929616   13072 notify.go:220] Checking for updates...
	I0908 12:38:01.950624   13072 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:38:02.036032   13072 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:38:02.095942   13072 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 12:38:02.102175   13072 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:38:02.134779   13072 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:38:02.143467   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:38:02.143467   13072 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:38:07.485139   13072 out.go:179] * Using the hyperv driver based on existing profile
	I0908 12:38:07.541014   13072 start.go:304] selected driver: hyperv
	I0908 12:38:07.541072   13072 start.go:918] validating driver "hyperv" against &{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:38:07.541072   13072 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:38:07.595483   13072 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:38:07.595483   13072 cni.go:84] Creating CNI manager for ""
	I0908 12:38:07.595483   13072 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0908 12:38:07.596130   13072 start.go:348] cluster config:
	{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:38:07.596491   13072 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:38:07.609040   13072 out.go:179] * Starting "multinode-818700" primary control-plane node in "multinode-818700" cluster
	I0908 12:38:07.615821   13072 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:38:07.615821   13072 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 12:38:07.615821   13072 cache.go:58] Caching tarball of preloaded images
	I0908 12:38:07.615821   13072 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 12:38:07.615821   13072 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:38:07.615821   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:38:07.620038   13072 start.go:360] acquireMachinesLock for multinode-818700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 12:38:07.620243   13072 start.go:364] duration metric: took 205.3µs to acquireMachinesLock for "multinode-818700"
	I0908 12:38:07.620737   13072 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:38:07.620737   13072 fix.go:54] fixHost starting: 
	I0908 12:38:07.621553   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:10.276640   13072 main.go:141] libmachine: [stdout =====>] : Off
	
	I0908 12:38:10.277405   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:10.277405   13072 fix.go:112] recreateIfNeeded on multinode-818700: state=Stopped err=<nil>
	W0908 12:38:10.277405   13072 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:38:10.284458   13072 out.go:252] * Restarting existing hyperv VM for "multinode-818700" ...
	I0908 12:38:10.284458   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-818700
	I0908 12:38:13.306249   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:13.307279   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:13.307279   13072 main.go:141] libmachine: Waiting for host to start...
	I0908 12:38:13.307390   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:15.565191   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:15.565191   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:15.565447   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:18.015920   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:18.015920   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:19.017102   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:21.174693   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:21.174693   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:21.175566   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:23.826969   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:23.826969   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:24.827793   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:27.030539   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:27.030539   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:27.030683   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:29.609925   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:29.610108   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:30.611071   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:32.810459   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:32.811453   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:32.811681   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:35.338447   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:35.338447   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:36.339407   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:38.469302   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:38.470082   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:38.470428   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:40.838773   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:40.838773   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:41.840141   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:44.010454   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:44.010454   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:44.011157   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:46.517691   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:38:46.517829   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:46.520770   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:48.581136   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:48.581345   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:48.581345   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:50.999063   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:38:50.999063   13072 main.go:141] libmachine: [stderr =====>] : 
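	[editor note] The block above shows the Hyper-V driver polling `(Get-VM …).networkadapters[0].ipaddresses[0]` once a second, getting empty stdout until the guest finally reports 172.20.59.7. A minimal sketch of that retry loop (function name, attempt count, and delay are hypothetical stand-ins, not minikube's actual values):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollIP retries lookup until it returns a non-empty address or attempts
// run out, mirroring the repeated ipaddresses[0] queries in the log above.
func pollIP(lookup func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := lookup(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("machine did not report an IP address")
}

func main() {
	calls := 0
	// Simulated guest: empty for the first few polls, then an address,
	// like the empty [stdout =====>] lines followed by 172.20.59.7.
	lookup := func() string {
		calls++
		if calls < 4 {
			return ""
		}
		return "172.20.59.7"
	}
	ip, err := pollIP(lookup, 10, time.Millisecond)
	fmt.Println(ip, err)
}
```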
	I0908 12:38:50.999063   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:38:51.004775   13072 machine.go:93] provisionDockerMachine start ...
	I0908 12:38:51.004775   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:53.093737   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:53.093737   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:53.094750   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:55.594798   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:38:55.594798   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:55.600673   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:38:55.601478   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:38:55.601478   13072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:38:55.739535   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 12:38:55.739622   13072 buildroot.go:166] provisioning hostname "multinode-818700"
	I0908 12:38:55.739686   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:57.784326   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:57.784919   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:57.784919   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:00.317323   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:00.317323   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:00.323466   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:00.324073   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:00.324073   13072 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-818700 && echo "multinode-818700" | sudo tee /etc/hostname
	I0908 12:39:00.493999   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-818700
	
	I0908 12:39:00.494119   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:02.605605   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:02.605699   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:02.605766   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:05.116295   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:05.117202   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:05.123103   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:05.123804   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:05.123804   13072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-818700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-818700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-818700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:39:05.284542   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
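	[editor note] The shell script run over SSH above either rewrites an existing `127.0.1.1` line or appends one, but only when no entry already names the host. A pure-string sketch of the same logic (the real code edits /etc/hosts in the guest with grep/sed/tee):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry reproduces the /etc/hosts fixup from the log above:
// if no line already ends with the hostname, rewrite an existing
// "127.0.1.1 ..." entry, or append a new one.
func ensureHostsEntry(hosts, name string) string {
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, " "+name) {
			return hosts // already present, nothing to do
		}
	}
	lines := strings.Split(hosts, "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "multinode-818700"))
}
```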
	I0908 12:39:05.284598   13072 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 12:39:05.284716   13072 buildroot.go:174] setting up certificates
	I0908 12:39:05.284748   13072 provision.go:84] configureAuth start
	I0908 12:39:05.284775   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:07.350197   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:07.350197   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:07.350197   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:09.763695   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:09.764664   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:09.764664   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:11.758974   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:11.759082   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:11.759082   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:14.218149   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:14.218190   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:14.218190   13072 provision.go:143] copyHostCerts
	I0908 12:39:14.218190   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 12:39:14.218887   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 12:39:14.218946   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 12:39:14.219094   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 12:39:14.220684   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 12:39:14.221213   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 12:39:14.221292   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 12:39:14.221292   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 12:39:14.222835   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 12:39:14.222835   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 12:39:14.222835   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 12:39:14.223634   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 12:39:14.224342   13072 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-818700 san=[127.0.0.1 172.20.59.7 localhost minikube multinode-818700]
	I0908 12:39:15.272739   13072 provision.go:177] copyRemoteCerts
	I0908 12:39:15.283735   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:39:15.283735   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:17.264376   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:17.264376   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:17.265073   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:19.696082   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:19.696082   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:19.696632   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:19.812688   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.528825s)
	I0908 12:39:19.812810   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 12:39:19.813025   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:39:19.866024   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 12:39:19.866146   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0908 12:39:19.920246   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 12:39:19.920994   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:39:19.983210   13072 provision.go:87] duration metric: took 14.6982062s to configureAuth
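	[editor note] The configureAuth phase above generates a server cert whose SANs cover every name minikube may dial: `san=[127.0.0.1 172.20.59.7 localhost minikube multinode-818700]`. A self-contained sketch of such a cert with Go's crypto/x509 (self-signed here for brevity; minikube signs with its CA key pair instead):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// makeServerCert builds a server certificate whose SANs match the
// san=[...] list logged by provision.go above.
func makeServerCert() (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-818700"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.59.7")},
		DNSNames:     []string{"localhost", "minikube", "multinode-818700"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := makeServerCert()
	if err != nil {
		panic(err)
	}
	fmt.Println(cert.DNSNames, cert.IPAddresses)
}
```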
	I0908 12:39:19.983387   13072 buildroot.go:189] setting minikube options for container-runtime
	I0908 12:39:19.984081   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:39:19.984081   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:22.117502   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:22.117502   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:22.118079   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:24.586114   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:24.586114   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:24.591388   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:24.591920   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:24.591920   13072 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:39:24.754929   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 12:39:24.754992   13072 buildroot.go:70] root file system type: tmpfs
	I0908 12:39:24.755083   13072 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:39:24.755083   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:26.840715   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:26.840715   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:26.840715   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:29.433909   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:29.433909   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:29.440401   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:29.440733   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:29.440733   13072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:39:29.601362   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 12:39:29.602013   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:31.642980   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:31.642980   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:31.643523   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:34.193694   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:34.193694   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:34.201447   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:34.201647   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:34.201647   13072 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:39:35.845221   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 12:39:35.845221   13072 machine.go:96] duration metric: took 44.8398809s to provisionDockerMachine
	I0908 12:39:35.845221   13072 start.go:293] postStartSetup for "multinode-818700" (driver="hyperv")
	I0908 12:39:35.845221   13072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:39:35.857524   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:39:35.857524   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:37.898074   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:37.898297   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:37.898297   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:40.366725   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:40.366725   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:40.367546   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:40.489516   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6319341s)
	I0908 12:39:40.502061   13072 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:39:40.509517   13072 info.go:137] Remote host: Buildroot 2025.02
	I0908 12:39:40.509517   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 12:39:40.510080   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 12:39:40.511208   13072 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 12:39:40.511381   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 12:39:40.522621   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:39:40.542030   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 12:39:40.594449   13072 start.go:296] duration metric: took 4.7491682s for postStartSetup
	I0908 12:39:40.594449   13072 fix.go:56] duration metric: took 1m32.97254s for fixHost
	I0908 12:39:40.594449   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:42.633699   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:42.633699   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:42.634513   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:45.259225   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:45.259225   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:45.265028   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:45.265833   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:45.265833   13072 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 12:39:45.408187   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757335185.425807946
	
	I0908 12:39:45.408187   13072 fix.go:216] guest clock: 1757335185.425807946
	I0908 12:39:45.408187   13072 fix.go:229] Guest: 2025-09-08 12:39:45.425807946 +0000 UTC Remote: 2025-09-08 12:39:40.5944494 +0000 UTC m=+99.087765001 (delta=4.831358546s)
	I0908 12:39:45.408187   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:47.448003   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:47.448702   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:47.449268   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:49.868797   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:49.868797   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:49.873942   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:49.874930   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:49.874930   13072 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757335185
	I0908 12:39:50.036490   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 12:39:45 UTC 2025
	
	I0908 12:39:50.036490   13072 fix.go:236] clock set: Mon Sep  8 12:39:45 UTC 2025
	 (err=<nil>)
	I0908 12:39:50.036490   13072 start.go:83] releasing machines lock for "multinode-818700", held for 1m42.414956s
	I0908 12:39:50.036490   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:52.112588   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:52.113221   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:52.113345   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:54.656827   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:54.656827   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:54.660938   13072 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 12:39:54.661061   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:54.671561   13072 ssh_runner.go:195] Run: cat /version.json
	I0908 12:39:54.671654   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:56.832478   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:59.467137   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:59.467137   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:59.468013   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:59.497104   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:59.497456   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:59.497835   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:59.559215   13072 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8982152s)
	W0908 12:39:59.559300   13072 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 12:39:59.593802   13072 ssh_runner.go:235] Completed: cat /version.json: (4.9221789s)
	I0908 12:39:59.605415   13072 ssh_runner.go:195] Run: systemctl --version
	I0908 12:39:59.626758   13072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 12:39:59.637601   13072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 12:39:59.648690   13072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:39:59.681308   13072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 12:39:59.681345   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:39:59.681662   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0908 12:39:59.730084   13072 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 12:39:59.730084   13072 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 12:39:59.753395   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 12:39:59.791282   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 12:39:59.812233   13072 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:39:59.824232   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:39:59.857037   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:39:59.888063   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:39:59.920569   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:39:59.952975   13072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:39:59.988850   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:40:00.023200   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:40:00.059143   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
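The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place: pin the pause sandbox image, force `SystemdCgroup = false` for the cgroupfs driver, and normalize the runtime to `io.containerd.runc.v2`. A self-contained sketch of the two key substitutions against a scratch copy (the TOML snippet is an abbreviated stand-in, not the full shipped config):

```shell
# Apply the same GNU sed patterns the log shows, but to a temp file
# rather than the live /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# pin the pause image expected by Kubernetes v1.34.0
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
# switch containerd to the cgroupfs cgroup driver
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

The `\1` backreference preserves the original indentation, which is why the patterns capture the leading spaces.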
	I0908 12:40:00.094286   13072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:40:00.112906   13072 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 12:40:00.124631   13072 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 12:40:00.155119   13072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
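The recovery path above is deliberate: `sysctl` cannot stat `net.bridge.bridge-nf-call-iptables` until the `br_netfilter` module is loaded (the warning notes this "might be okay"), so minikube loads the module and then enables IPv4 forwarding. A guarded sketch that only prints the commands it would run, since the real steps need root and a Linux kernel:

```shell
# Dry-run sketch of the bridge-netfilter setup sequence from the log.
enable_bridge_nf() {
  if [ ! -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    # key missing => module not loaded yet, same as the "cannot stat" error
    echo "would run: modprobe br_netfilter"
  fi
  echo "would run: sysctl net.bridge.bridge-nf-call-iptables=1"
  echo "would run: sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'"
}
enable_bridge_nf
```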
	I0908 12:40:00.186151   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:00.430601   13072 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:40:00.500975   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:40:00.510554   13072 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:40:00.556054   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:40:00.591647   13072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:40:00.638099   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:40:00.671147   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:40:00.706101   13072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 12:40:00.772892   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:40:00.799354   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:40:00.846910   13072 ssh_runner.go:195] Run: which cri-dockerd
	I0908 12:40:00.866010   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:40:00.885595   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:40:00.932179   13072 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:40:01.158756   13072 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:40:01.383317   13072 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:40:01.383680   13072 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:40:01.432180   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:40:01.467193   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:01.696647   13072 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:40:02.540518   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:40:02.577126   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:40:02.612325   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:40:02.648945   13072 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:40:02.875857   13072 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:40:03.110339   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:03.347129   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:40:03.414120   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:40:03.450822   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:03.685990   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:40:03.849661   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:40:03.872329   13072 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:40:03.883568   13072 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:40:03.891848   13072 start.go:563] Will wait 60s for crictl version
	I0908 12:40:03.903165   13072 ssh_runner.go:195] Run: which crictl
	I0908 12:40:03.919122   13072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:40:03.974682   13072 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:40:03.983714   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:40:04.025889   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:40:04.066105   13072 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:40:04.066143   13072 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 12:40:04.074255   13072 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 12:40:04.074255   13072 ip.go:218] interface addr: 172.20.48.1/20
	I0908 12:40:04.084128   13072 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 12:40:04.091158   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
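The `{ grep -v …; echo …; } > /tmp/h.$$; sudo cp` pipeline above is an idempotent hosts-file update: drop any stale `host.minikube.internal` entry, append the current gateway IP, and copy the result back atomically. The same pattern against a scratch file (path, starting contents, and the stale `.9` address are made up for illustration):

```shell
# Idempotent hosts-entry refresh, as in the log but on a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.20.48.9\thost.minikube.internal\n' > "$hosts"
ip=172.20.48.1
# strip the old entry, append the fresh one, then copy back
# (the log does the final copy with `sudo cp` onto /etc/hosts)
{ grep -v 'host.minikube.internal' "$hosts"; printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
```

Re-running the block leaves exactly one entry, which is the point of filtering before appending.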
	I0908 12:40:04.114381   13072 kubeadm.go:875] updating cluster {Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:40:04.114381   13072 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:40:04.124429   13072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:40:04.155894   13072 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0908 12:40:04.155894   13072 docker.go:621] Images already preloaded, skipping extraction
	I0908 12:40:04.163761   13072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:40:04.186783   13072 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0908 12:40:04.186783   13072 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:40:04.186783   13072 kubeadm.go:926] updating node { 172.20.59.7 8443 v1.34.0 docker true true} ...
	I0908 12:40:04.187752   13072 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-818700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:40:04.195782   13072 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 12:40:04.267110   13072 cni.go:84] Creating CNI manager for ""
	I0908 12:40:04.267110   13072 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0908 12:40:04.267110   13072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:40:04.267110   13072 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.59.7 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-818700 NodeName:multinode-818700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.59.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.59.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:40:04.267110   13072 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.59.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-818700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.59.7"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.59.7"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
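The kubeadm config printed above is a single multi-document YAML carrying four kinds (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later written to `/var/tmp/minikube/kubeadm.yaml.new`. A quick way to enumerate the documents, shown against a minimal stand-in file rather than the real one:

```shell
# List the `kind:` of each YAML document in a kubeadm-style bundle.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
kinds=$(awk '/^kind:/{print $2}' "$cfg")
echo "$kinds"
```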
	
	I0908 12:40:04.278029   13072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:40:04.301927   13072 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:40:04.312957   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:40:04.333290   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0908 12:40:04.368859   13072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:40:04.402521   13072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I0908 12:40:04.455352   13072 ssh_runner.go:195] Run: grep 172.20.59.7	control-plane.minikube.internal$ /etc/hosts
	I0908 12:40:04.461771   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.59.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:40:04.498498   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:04.744131   13072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:40:04.783453   13072 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700 for IP: 172.20.59.7
	I0908 12:40:04.783527   13072 certs.go:194] generating shared ca certs ...
	I0908 12:40:04.783527   13072 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:04.784489   13072 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 12:40:04.784900   13072 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 12:40:04.784900   13072 certs.go:256] generating profile certs ...
	I0908 12:40:04.785767   13072 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.key
	I0908 12:40:04.785767   13072 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4
	I0908 12:40:04.785767   13072 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.7]
	I0908 12:40:04.972131   13072 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4 ...
	I0908 12:40:04.972131   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4: {Name:mkf02d81f3a64226491daaedb867425cb601c513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:04.974105   13072 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4 ...
	I0908 12:40:04.974105   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4: {Name:mk33ea48fd7cabb154abff9d71d34b0131ffcb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:04.975121   13072 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt
	I0908 12:40:04.991100   13072 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key
	I0908 12:40:04.992096   13072 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key
	I0908 12:40:04.992096   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 12:40:04.992845   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 12:40:04.993130   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 12:40:04.993248   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 12:40:04.993248   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 12:40:04.993248   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 12:40:04.993816   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 12:40:04.993877   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 12:40:04.993877   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 12:40:04.993877   13072 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 12:40:04.993877   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 12:40:04.994833   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 12:40:04.994833   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 12:40:04.994833   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 12:40:04.995877   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 12:40:04.995877   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 12:40:04.995877   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:04.996506   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 12:40:04.997762   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:40:05.057751   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:40:05.114963   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:40:05.166146   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:40:05.220827   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 12:40:05.269954   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:40:05.319080   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:40:05.375087   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:40:05.424402   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 12:40:05.475078   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:40:05.526604   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 12:40:05.579086   13072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:40:05.623528   13072 ssh_runner.go:195] Run: openssl version
	I0908 12:40:05.642660   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:40:05.673936   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:05.681087   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:05.691417   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:05.711145   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:40:05.738334   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 12:40:05.773240   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 12:40:05.781117   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 12:40:05.791901   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 12:40:05.814750   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 12:40:05.855413   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 12:40:05.887694   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 12:40:05.895785   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 12:40:05.907249   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 12:40:05.928762   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
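The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its subject-hash name (e.g. `b5213941.0`), which is how OpenSSL locates trust anchors in `/etc/ssl/certs`. A sketch of the symlink step in a scratch directory, with the hash taken from the log rather than recomputed:

```shell
# Create the subject-hash symlink for a CA, as the `test -L || ln -fs`
# commands above do inside the VM's /etc/ssl/certs.
certs=$(mktemp -d)
touch "$certs/minikubeCA.pem"
hash=b5213941   # value from the log; normally: openssl x509 -hash -noout -in <cert>
test -L "$certs/$hash.0" || ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
test -L "$certs/$hash.0" && echo "symlink in place: $hash.0"
```

The `test -L ||` guard makes the step idempotent across restarts, matching the log's form.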
	I0908 12:40:05.963369   13072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:40:05.981490   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:40:06.003106   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:40:06.025167   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:40:06.047011   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:40:06.068293   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:40:06.090004   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
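The `-checkend 86400` probes above ask whether each cluster certificate remains valid for another 24 hours (exit 0 means yes); a failure here would trigger cert regeneration before the restart. A sketch using a throwaway self-signed certificate, assuming `openssl` is on PATH — not the cluster's real `apiserver-kubelet-client.crt`:

```shell
# Check that a cert is still valid 86400s (24h) from now.
tmpd=$(mktemp -d)
# throwaway 2-day self-signed cert, purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=sketch" \
  -keyout "$tmpd/k.pem" -out "$tmpd/c.pem" 2>/dev/null
if openssl x509 -noout -in "$tmpd/c.pem" -checkend 86400; then
  echo "cert still valid 24h from now"
fi
```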
	I0908 12:40:06.102240   13072 kubeadm.go:392] StartCluster: {Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:40:06.110639   13072 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:40:06.152241   13072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:40:06.175947   13072 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:40:06.175947   13072 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:40:06.186834   13072 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:40:06.206845   13072 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:40:06.206845   13072 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-818700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:40:06.206845   13072 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-818700" cluster setting kubeconfig missing "multinode-818700" context setting]
	I0908 12:40:06.206845   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:06.231365   13072 kapi.go:59] client config for multinode-818700: &rest.Config{Host:"https://172.20.59.7:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 12:40:06.232788   13072 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 12:40:06.246485   13072 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:40:06.266002   13072 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.20.50.55
	+  advertiseAddress: 172.20.59.7
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-818700"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.20.50.55"
	+      value: "172.20.59.7"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.20.50.55"]
	+  certSANs: ["127.0.0.1", "localhost", "172.20.59.7"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
	I0908 12:40:06.266002   13072 kubeadm.go:1152] stopping kube-system containers ...
	I0908 12:40:06.274035   13072 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:40:06.301860   13072 docker.go:484] Stopping containers: [4b397652bed6 51939f01ba77 da2be864ea38 21dbed80ecd5 0e97a2b4abd9 a793eb6b8d63 cf6168c36a0f 7e1c24e28ed9 4ef5a92069c2 07ac3a29d931 3ae48749732c 19b41e0f8bcf e229cb205b5d a817a17208da 62d71a5295d0 1b891663f1f6]
	I0908 12:40:06.311621   13072 ssh_runner.go:195] Run: docker stop 4b397652bed6 51939f01ba77 da2be864ea38 21dbed80ecd5 0e97a2b4abd9 a793eb6b8d63 cf6168c36a0f 7e1c24e28ed9 4ef5a92069c2 07ac3a29d931 3ae48749732c 19b41e0f8bcf e229cb205b5d a817a17208da 62d71a5295d0 1b891663f1f6
	I0908 12:40:06.353028   13072 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 12:40:06.390869   13072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:40:06.410494   13072 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 12:40:06.410494   13072 kubeadm.go:157] found existing configuration files:
	
	I0908 12:40:06.420115   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 12:40:06.438809   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 12:40:06.449433   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 12:40:06.477036   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 12:40:06.497007   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 12:40:06.508033   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:40:06.538974   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 12:40:06.559665   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 12:40:06.571328   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:40:06.606530   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 12:40:06.626203   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 12:40:06.637350   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 12:40:06.669099   13072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:40:06.694075   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:07.036307   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:08.669997   13072 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6335155s)
	I0908 12:40:08.670184   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:09.059358   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:09.134059   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:09.227450   13072 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:40:09.238171   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:09.742212   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:10.236299   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:10.739398   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:11.238581   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:11.291014   13072 api_server.go:72] duration metric: took 2.0636218s to wait for apiserver process to appear ...
	I0908 12:40:11.291101   13072 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:40:11.291101   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.082710   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:40:15.082784   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:40:15.082884   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.118179   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:40:15.118259   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:40:15.291953   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.306441   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:40:15.306486   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:40:15.791370   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.808740   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:40:15.808740   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:40:16.291801   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:16.305118   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:40:16.305118   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:40:16.791740   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:16.803012   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 200:
	ok
	I0908 12:40:16.816031   13072 api_server.go:141] control plane version: v1.34.0
	I0908 12:40:16.816031   13072 api_server.go:131] duration metric: took 5.5248601s to wait for apiserver health ...
	I0908 12:40:16.816031   13072 cni.go:84] Creating CNI manager for ""
	I0908 12:40:16.816031   13072 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0908 12:40:16.819055   13072 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 12:40:16.833687   13072 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 12:40:16.860278   13072 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 12:40:16.860375   13072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 12:40:16.951829   13072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 12:40:18.157344   13072 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2054996s)
	I0908 12:40:18.157344   13072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:40:18.166008   13072 system_pods.go:59] 12 kube-system pods found
	I0908 12:40:18.166008   13072 system_pods.go:61] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:40:18.166008   13072 system_pods.go:61] "etcd-multinode-818700" [cf243776-ef17-4460-ac8d-1775558b5246] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kindnet-chkc2" [114504d7-aec1-449b-9900-a9a3871cdd14] Running
	I0908 12:40:18.166008   13072 system_pods.go:61] "kindnet-jb7kv" [80bdb808-36b1-4069-9919-efcfc7cc5f4b] Running
	I0908 12:40:18.166008   13072 system_pods.go:61] "kube-apiserver-multinode-818700" [27f58fe2-88dc-41bc-9b5e-f5dd0ba551b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kube-proxy-fb8cd" [edcbee45-3ba2-4746-9d0b-321d40a96c25] Running
	I0908 12:40:18.166215   13072 system_pods.go:61] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:40:18.166215   13072 system_pods.go:61] "kube-proxy-m9smd" [d16c9eb2-1d38-4652-880b-d217ba193c1a] Running
	I0908 12:40:18.166271   13072 system_pods.go:61] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:40:18.166271   13072 system_pods.go:61] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:40:18.166271   13072 system_pods.go:74] duration metric: took 8.9268ms to wait for pod list to return data ...
	I0908 12:40:18.166271   13072 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:40:18.169954   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:18.170014   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:18.170014   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:18.170014   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:18.170014   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:18.170071   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:18.170071   13072 node_conditions.go:105] duration metric: took 3.8007ms to run NodePressure ...
	I0908 12:40:18.170071   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:18.926578   13072 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 12:40:18.944365   13072 kubeadm.go:735] kubelet initialised
	I0908 12:40:18.944365   13072 kubeadm.go:736] duration metric: took 17.7863ms waiting for restarted kubelet to initialise ...
	I0908 12:40:18.944365   13072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:40:18.973797   13072 ops.go:34] apiserver oom_adj: -16
	I0908 12:40:18.973834   13072 kubeadm.go:593] duration metric: took 12.7977257s to restartPrimaryControlPlane
	I0908 12:40:18.973896   13072 kubeadm.go:394] duration metric: took 12.8714944s to StartCluster
	I0908 12:40:18.973934   13072 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:18.974225   13072 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:40:18.975921   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:18.977816   13072 start.go:235] Will wait 6m0s for node &{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 12:40:18.977816   13072 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:40:18.978089   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:40:18.983185   13072 out.go:179] * Enabled addons: 
	I0908 12:40:18.985894   13072 out.go:179] * Verifying Kubernetes components...
	I0908 12:40:18.988443   13072 addons.go:514] duration metric: took 10.664ms for enable addons: enabled=[]
	I0908 12:40:19.000533   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:19.328426   13072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:40:19.362166   13072 node_ready.go:35] waiting up to 6m0s for node "multinode-818700" to be "Ready" ...
	W0908 12:40:21.367954   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:23.368753   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:25.369958   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:27.868237   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:29.869669   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:31.870381   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:33.871524   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:36.368864   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:38.870431   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:41.368050   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:43.368322   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:45.369241   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:47.868719   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:49.870431   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:52.368482   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:54.369276   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:56.867996   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:58.869035   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	I0908 12:40:59.369470   13072 node_ready.go:49] node "multinode-818700" is "Ready"
	I0908 12:40:59.369545   13072 node_ready.go:38] duration metric: took 40.0059439s for node "multinode-818700" to be "Ready" ...
	I0908 12:40:59.369618   13072 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:40:59.380718   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:59.426156   13072 api_server.go:72] duration metric: took 40.4477113s to wait for apiserver process to appear ...
	I0908 12:40:59.426156   13072 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:40:59.426156   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:59.434287   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 200:
	ok
	I0908 12:40:59.436343   13072 api_server.go:141] control plane version: v1.34.0
	I0908 12:40:59.436343   13072 api_server.go:131] duration metric: took 10.1871ms to wait for apiserver health ...
	I0908 12:40:59.436343   13072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:40:59.446298   13072 system_pods.go:59] 12 kube-system pods found
	I0908 12:40:59.446298   13072 system_pods.go:61] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:40:59.446298   13072 system_pods.go:61] "etcd-multinode-818700" [cf243776-ef17-4460-ac8d-1775558b5246] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kindnet-chkc2" [114504d7-aec1-449b-9900-a9a3871cdd14] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kindnet-jb7kv" [80bdb808-36b1-4069-9919-efcfc7cc5f4b] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-apiserver-multinode-818700" [27f58fe2-88dc-41bc-9b5e-f5dd0ba551b7] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-proxy-fb8cd" [edcbee45-3ba2-4746-9d0b-321d40a96c25] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:40:59.446844   13072 system_pods.go:61] "kube-proxy-m9smd" [d16c9eb2-1d38-4652-880b-d217ba193c1a] Running
	I0908 12:40:59.446844   13072 system_pods.go:61] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:40:59.446844   13072 system_pods.go:61] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:40:59.446844   13072 system_pods.go:74] duration metric: took 10.5005ms to wait for pod list to return data ...
	I0908 12:40:59.447105   13072 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:40:59.452429   13072 default_sa.go:45] found service account: "default"
	I0908 12:40:59.452429   13072 default_sa.go:55] duration metric: took 5.3246ms for default service account to be created ...
	I0908 12:40:59.452429   13072 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:40:59.457399   13072 system_pods.go:86] 12 kube-system pods found
	I0908 12:40:59.457419   13072 system_pods.go:89] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:40:59.457419   13072 system_pods.go:89] "etcd-multinode-818700" [cf243776-ef17-4460-ac8d-1775558b5246] Running
	I0908 12:40:59.457419   13072 system_pods.go:89] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kindnet-chkc2" [114504d7-aec1-449b-9900-a9a3871cdd14] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kindnet-jb7kv" [80bdb808-36b1-4069-9919-efcfc7cc5f4b] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-apiserver-multinode-818700" [27f58fe2-88dc-41bc-9b5e-f5dd0ba551b7] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-proxy-fb8cd" [edcbee45-3ba2-4746-9d0b-321d40a96c25] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-proxy-m9smd" [d16c9eb2-1d38-4652-880b-d217ba193c1a] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:40:59.457544   13072 system_pods.go:126] duration metric: took 5.1141ms to wait for k8s-apps to be running ...
	I0908 12:40:59.457544   13072 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:40:59.468181   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:40:59.497418   13072 system_svc.go:56] duration metric: took 39.7171ms WaitForService to wait for kubelet
	I0908 12:40:59.497418   13072 kubeadm.go:578] duration metric: took 40.5189724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:40:59.497418   13072 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:40:59.502061   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:59.502139   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:59.502175   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:59.502175   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:59.502175   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:59.502175   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:59.502175   13072 node_conditions.go:105] duration metric: took 4.7567ms to run NodePressure ...
	I0908 12:40:59.502175   13072 start.go:241] waiting for startup goroutines ...
	I0908 12:40:59.502267   13072 start.go:246] waiting for cluster config update ...
	I0908 12:40:59.502267   13072 start.go:255] writing updated cluster config ...
	I0908 12:40:59.506150   13072 out.go:203] 
	I0908 12:40:59.509685   13072 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:40:59.528241   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:40:59.528241   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:40:59.536613   13072 out.go:179] * Starting "multinode-818700-m02" worker node in "multinode-818700" cluster
	I0908 12:40:59.540466   13072 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:40:59.540466   13072 cache.go:58] Caching tarball of preloaded images
	I0908 12:40:59.540772   13072 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 12:40:59.540772   13072 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:40:59.541299   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:40:59.543740   13072 start.go:360] acquireMachinesLock for multinode-818700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 12:40:59.543740   13072 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-818700-m02"
	I0908 12:40:59.543740   13072 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:40:59.543740   13072 fix.go:54] fixHost starting: m02
	I0908 12:40:59.544565   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:01.629753   13072 main.go:141] libmachine: [stdout =====>] : Off
	
	I0908 12:41:01.629842   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:01.629842   13072 fix.go:112] recreateIfNeeded on multinode-818700-m02: state=Stopped err=<nil>
	W0908 12:41:01.629842   13072 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:41:01.635905   13072 out.go:252] * Restarting existing hyperv VM for "multinode-818700-m02" ...
	I0908 12:41:01.635905   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-818700-m02
	I0908 12:41:04.662338   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:04.662338   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:04.662338   13072 main.go:141] libmachine: Waiting for host to start...
	I0908 12:41:04.662338   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:07.001780   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:07.001780   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:07.002755   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:09.599516   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:09.599516   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:10.599938   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:12.744066   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:12.744066   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:12.744066   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:15.284831   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:15.284831   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:16.285356   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:18.520675   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:18.520675   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:18.521334   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:21.071572   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:21.071572   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:22.072225   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:24.264384   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:24.264384   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:24.264488   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:26.755832   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:26.755832   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:27.756184   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:30.009178   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:30.009178   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:30.009724   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:32.483427   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:32.483427   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:33.484341   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:35.696167   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:35.697199   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:35.697352   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:38.293622   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:38.293986   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:38.297103   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:40.386086   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:40.387236   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:40.387236   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:42.921144   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:42.921144   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:42.921542   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:41:42.924489   13072 machine.go:93] provisionDockerMachine start ...
	I0908 12:41:42.924546   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:45.034436   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:45.034436   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:45.034436   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:47.561523   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:47.561832   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:47.568550   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:41:47.568712   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:41:47.568712   13072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:41:47.696061   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 12:41:47.696061   13072 buildroot.go:166] provisioning hostname "multinode-818700-m02"
	I0908 12:41:47.696061   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:49.841810   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:49.842022   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:49.842122   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:52.321745   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:52.321745   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:52.328085   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:41:52.328797   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:41:52.328913   13072 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-818700-m02 && echo "multinode-818700-m02" | sudo tee /etc/hostname
	I0908 12:41:52.491464   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-818700-m02
	
	I0908 12:41:52.491464   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:54.554780   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:54.555673   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:54.555673   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:57.010949   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:57.010949   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:57.016622   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:41:57.017048   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:41:57.017048   13072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-818700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-818700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-818700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:41:57.164603   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:41:57.164673   13072 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 12:41:57.164673   13072 buildroot.go:174] setting up certificates
	I0908 12:41:57.164673   13072 provision.go:84] configureAuth start
	I0908 12:41:57.164775   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:59.335506   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:59.336263   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:59.336263   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:01.827652   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:01.827652   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:01.827652   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:03.906965   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:03.906965   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:03.907602   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:06.409304   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:06.409304   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:06.409611   13072 provision.go:143] copyHostCerts
	I0908 12:42:06.409782   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 12:42:06.410212   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 12:42:06.410212   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 12:42:06.410769   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 12:42:06.412045   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 12:42:06.412382   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 12:42:06.412382   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 12:42:06.412833   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 12:42:06.413831   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 12:42:06.414196   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 12:42:06.414196   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 12:42:06.414518   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 12:42:06.415554   13072 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-818700-m02 san=[127.0.0.1 172.20.54.47 localhost minikube multinode-818700-m02]
	I0908 12:42:06.579163   13072 provision.go:177] copyRemoteCerts
	I0908 12:42:06.591241   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:42:06.591359   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:08.686466   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:08.686736   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:08.686736   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:11.232959   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:11.234039   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:11.234598   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:11.336203   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7446556s)
	I0908 12:42:11.336203   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 12:42:11.336797   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:42:11.387615   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 12:42:11.388393   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0908 12:42:11.441905   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 12:42:11.441905   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:42:11.495932   13072 provision.go:87] duration metric: took 14.3310139s to configureAuth
	I0908 12:42:11.495932   13072 buildroot.go:189] setting minikube options for container-runtime
	I0908 12:42:11.496804   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:42:11.496902   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:13.593590   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:13.593651   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:13.593651   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:16.086198   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:16.086198   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:16.092032   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:16.092362   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:16.092362   13072 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:42:16.223149   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 12:42:16.223227   13072 buildroot.go:70] root file system type: tmpfs
	I0908 12:42:16.223415   13072 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:42:16.223545   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:18.293106   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:18.293106   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:18.293819   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:20.803122   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:20.803122   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:20.809379   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:20.810385   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:20.810566   13072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.20.59.7"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:42:20.961280   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.20.59.7
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 12:42:20.961280   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:23.030201   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:23.031313   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:23.031341   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:25.534657   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:25.534657   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:25.541890   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:25.542557   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:25.542633   13072 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:42:27.018819   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 12:42:27.018819   13072 machine.go:96] duration metric: took 44.0937181s to provisionDockerMachine
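The update pattern above — write a `.new` copy, `diff` it against the live unit, and swap plus `daemon-reload` only when they differ (or when the old file is missing, as in this run's "can't stat" output) — can be sketched without touching systemd; the scratch directory and unit body below are stand-ins, not the real `/lib/systemd/system` paths:

```shell
# Scratch stand-in for /lib/systemd/system; no sudo or systemd needed.
unit_dir="$(mktemp -d)"
printf '%s\n' '[Unit]' 'Description=demo unit' > "${unit_dir}/docker.service.new"

# diff exits non-zero when the files differ or the old unit is missing
# (the "can't stat" case in the log), which triggers install + reload.
if ! diff -u "${unit_dir}/docker.service" "${unit_dir}/docker.service.new" 2>/dev/null; then
  mv "${unit_dir}/docker.service.new" "${unit_dir}/docker.service"
  echo "installed; next: systemctl daemon-reload && systemctl restart docker"
fi
```

Swapping only on difference keeps the operation idempotent: re-provisioning an already-configured node leaves the unit, and the running daemon, untouched.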
	I0908 12:42:27.018819   13072 start.go:293] postStartSetup for "multinode-818700-m02" (driver="hyperv")
	I0908 12:42:27.018819   13072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:42:27.032545   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:42:27.032545   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:29.223462   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:29.223462   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:29.223897   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:31.711767   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:31.712204   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:31.712876   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:31.818924   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7863189s)
	I0908 12:42:31.830624   13072 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:42:31.837254   13072 info.go:137] Remote host: Buildroot 2025.02
	I0908 12:42:31.837319   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 12:42:31.837884   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 12:42:31.839245   13072 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 12:42:31.839245   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 12:42:31.850245   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:42:31.871530   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 12:42:31.922079   13072 start.go:296] duration metric: took 4.9031977s for postStartSetup
	I0908 12:42:31.922079   13072 fix.go:56] duration metric: took 1m32.377175s for fixHost
	I0908 12:42:31.922205   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:34.014982   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:34.014982   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:34.015112   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:36.532954   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:36.534013   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:36.540324   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:36.540869   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:36.540869   13072 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 12:42:36.669940   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757335356.686959859
	
	I0908 12:42:36.670024   13072 fix.go:216] guest clock: 1757335356.686959859
	I0908 12:42:36.670024   13072 fix.go:229] Guest: 2025-09-08 12:42:36.686959859 +0000 UTC Remote: 2025-09-08 12:42:31.9220793 +0000 UTC m=+270.413236201 (delta=4.764880559s)
	I0908 12:42:36.670024   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:38.749729   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:38.750645   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:38.750753   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:41.257010   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:41.257010   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:41.263862   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:41.264582   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:41.264582   13072 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757335356
	I0908 12:42:41.423551   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 12:42:36 UTC 2025
	
	I0908 12:42:41.423612   13072 fix.go:236] clock set: Mon Sep  8 12:42:36 UTC 2025
	 (err=<nil>)
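The clock fix-up above reads `date +%s.%N` from the guest, computes the skew against the host-side timestamp (delta=4.76s in this run), and resets the guest with `sudo date -s @<epoch>`. A self-contained sketch of the decision, with the guest epoch taken from the log and the host epoch and threshold chosen purely for illustration:

```shell
# Guest epoch taken from the log above; host epoch is a hypothetical
# reading ~5s behind it, mirroring the observed delta.
guest_epoch=1757335356
host_epoch=1757335351
delta=$((guest_epoch - host_epoch))
echo "skew: ${delta}s"
# Reset the guest clock when the skew is non-trivial (threshold is
# illustrative, not minikube's actual cutoff):
if [ "${delta#-}" -gt 2 ]; then
  echo "would run in guest: sudo date -s @${guest_epoch}"
fi
```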
	I0908 12:42:41.423612   13072 start.go:83] releasing machines lock for "multinode-818700-m02", held for 1m41.8785887s
	I0908 12:42:41.423855   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:43.482359   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:43.482359   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:43.482828   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:45.973251   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:45.973407   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:45.978162   13072 out.go:179] * Found network options:
	I0908 12:42:45.980594   13072 out.go:179]   - NO_PROXY=172.20.59.7
	W0908 12:42:45.983468   13072 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 12:42:45.987153   13072 out.go:179]   - NO_PROXY=172.20.59.7
	W0908 12:42:45.992629   13072 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 12:42:45.994581   13072 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 12:42:45.997447   13072 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 12:42:45.997447   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:46.011191   13072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:42:46.011191   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:48.158369   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:50.849454   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:50.850270   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:50.850777   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:50.886428   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:50.886428   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:50.887861   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:50.942007   13072 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9307533s)
	W0908 12:42:50.942144   13072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 12:42:50.952857   13072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:42:50.958356   13072 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9608462s)
	W0908 12:42:50.958356   13072 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
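The status-127 failure above is a host/guest mismatch rather than a network error: the probe command is assembled on the Windows host with the host's binary name, and the Linux guest has no `curl.exe`, so the registry warning later in the log fires spuriously. A defensive sketch that resolves whichever curl-like binary actually exists before probing (the `pick_curl` helper is illustrative, not minikube code):

```shell
# Return the first binary from the argument list that exists on PATH;
# exit 127 ("command not found") when none do — the status seen above.
pick_curl() {
  for c in "$@"; do
    if command -v "$c" >/dev/null 2>&1; then
      printf '%s\n' "$c"
      return 0
    fi
  done
  return 127
}

pick_curl curl.exe curl || echo "no curl-like binary (exit 127)"
```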
	I0908 12:42:50.991280   13072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 12:42:50.991280   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:42:50.991508   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:42:51.038877   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 12:42:51.073139   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0908 12:42:51.078025   13072 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 12:42:51.078434   13072 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 12:42:51.098004   13072 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:42:51.108947   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:42:51.143600   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:42:51.175698   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:42:51.208739   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:42:51.242449   13072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:42:51.278753   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:42:51.319913   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:42:51.352933   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 12:42:51.385498   13072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:42:51.404667   13072 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 12:42:51.416405   13072 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 12:42:51.449757   13072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
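The sysctl failure a few lines up is expected on a fresh guest: `/proc/sys/net/bridge/bridge-nf-call-iptables` only exists once the `br_netfilter` module is loaded, which is why the log proceeds to `modprobe br_netfilter` and then enables IPv4 forwarding. The ordering can be sketched as an echo-only dry run (no root required):

```shell
# Order matters: the bridge-nf sysctl key is created by the br_netfilter
# module, so check for the key first and load the module if it's absent.
key=/proc/sys/net/bridge/bridge-nf-call-iptables
if [ ! -e "${key}" ]; then
  echo "would run: sudo modprobe br_netfilter"
fi
echo "would run: sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'"
```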
	I0908 12:42:51.481799   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:51.709648   13072 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:42:51.772715   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:42:51.783418   13072 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:42:51.823583   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:42:51.861663   13072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:42:51.907072   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:42:51.947861   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:42:51.990473   13072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 12:42:52.064724   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:42:52.092720   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:42:52.139508   13072 ssh_runner.go:195] Run: which cri-dockerd
	I0908 12:42:52.159518   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:42:52.178994   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:42:52.231638   13072 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:42:52.473156   13072 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:42:52.690788   13072 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:42:52.690880   13072 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:42:52.740223   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:42:52.777385   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:53.007826   13072 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:42:53.797488   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:42:53.835101   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:42:53.869922   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:42:53.907002   13072 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:42:54.134769   13072 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:42:54.358521   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:54.577981   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:42:54.648450   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:42:54.695724   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:54.915367   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:42:55.075876   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:42:55.099945   13072 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:42:55.111838   13072 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:42:55.120287   13072 start.go:563] Will wait 60s for crictl version
	I0908 12:42:55.130557   13072 ssh_runner.go:195] Run: which crictl
	I0908 12:42:55.147985   13072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:42:55.201709   13072 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:42:55.211253   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:42:55.253651   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:42:55.296256   13072 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:42:55.298783   13072 out.go:179]   - env NO_PROXY=172.20.59.7
	I0908 12:42:55.301327   13072 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 12:42:55.307392   13072 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 12:42:55.307392   13072 ip.go:218] interface addr: 172.20.48.1/20
	I0908 12:42:55.316388   13072 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 12:42:55.324352   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
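The `/etc/hosts` command above uses a drop-then-append pattern so repeated provisioning never duplicates the `host.minikube.internal` entry. The same logic against a scratch copy (no sudo, paths are stand-ins for the real `/etc/hosts`):

```shell
# Idempotent hosts-entry refresh, mirroring the command above but
# against a scratch file.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n172.20.48.1\thost.minikube.internal\n' > "${hosts}"

# Drop any stale mapping for the name, then append the current one, so
# re-running never duplicates the entry.
{ grep -v 'host\.minikube\.internal$' "${hosts}"; \
  printf '172.20.48.1\thost.minikube.internal\n'; } > "${hosts}.new"
mv "${hosts}.new" "${hosts}"
grep -c 'host\.minikube\.internal' "${hosts}"   # prints 1
```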
	I0908 12:42:55.350780   13072 mustload.go:65] Loading cluster: multinode-818700
	I0908 12:42:55.351474   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:42:55.352251   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:42:57.477666   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:57.477666   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:57.478070   13072 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:42:57.478989   13072 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700 for IP: 172.20.54.47
	I0908 12:42:57.479112   13072 certs.go:194] generating shared ca certs ...
	I0908 12:42:57.479167   13072 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:42:57.479167   13072 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 12:42:57.479984   13072 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 12:42:57.479984   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 12:42:57.480524   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 12:42:57.480844   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 12:42:57.480993   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 12:42:57.481615   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 12:42:57.481973   13072 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 12:42:57.482089   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 12:42:57.482456   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 12:42:57.482792   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 12:42:57.482977   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 12:42:57.483655   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 12:42:57.483981   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.484227   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:57.484384   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 12:42:57.484384   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:42:57.549296   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:42:57.603656   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:42:57.656348   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:42:57.719343   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 12:42:57.771061   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:42:57.824925   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 12:42:57.886853   13072 ssh_runner.go:195] Run: openssl version
	I0908 12:42:57.907285   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 12:42:57.937457   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.945585   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.956386   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.977131   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:42:58.009298   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:42:58.044869   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:58.054553   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:58.070902   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:58.095219   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:42:58.132705   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 12:42:58.164961   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 12:42:58.173435   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 12:42:58.183743   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 12:42:58.204082   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
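The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's directory-based CA lookup: certificates are found via symlinks named `<subject-hash>.0` (e.g. `b5213941.0` for the minikube CA). A scratch-directory sketch with a throwaway self-signed certificate generated on the fly:

```shell
# OpenSSL finds CAs by <subject-hash>.0 symlinks; this reproduces the
# hashing + symlinking the provisioner does under /etc/ssl/certs.
certdir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
  -keyout "${certdir}/demo.key" -out "${certdir}/demo.pem" 2>/dev/null
h="$(openssl x509 -hash -noout -in "${certdir}/demo.pem")"
ln -fs "${certdir}/demo.pem" "${certdir}/${h}.0"
readlink "${certdir}/${h}.0"
```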
	I0908 12:42:58.236755   13072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:42:58.242801   13072 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 12:42:58.242801   13072 kubeadm.go:926] updating node {m02 172.20.54.47 8443 v1.34.0 docker false true} ...
	I0908 12:42:58.242801   13072 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-818700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.54.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:42:58.253673   13072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:42:58.277066   13072 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:42:58.288891   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0908 12:42:58.308125   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0908 12:42:58.346573   13072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:42:58.394915   13072 ssh_runner.go:195] Run: grep 172.20.59.7	control-plane.minikube.internal$ /etc/hosts
	I0908 12:42:58.401165   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.59.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:42:58.437207   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:58.662492   13072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:42:58.715977   13072 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:42:58.716956   13072 start.go:317] joinCluster: &{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.47 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:42:58.717185   13072 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.20.54.47 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0908 12:42:58.717185   13072 host.go:66] Checking if "multinode-818700-m02" exists ...
	I0908 12:42:58.717432   13072 mustload.go:65] Loading cluster: multinode-818700
	I0908 12:42:58.718333   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:42:58.718887   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:43:00.841661   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:43:00.841661   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:43:00.841661   13072 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:43:00.841661   13072 api_server.go:166] Checking apiserver status ...
	I0908 12:43:00.853435   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:43:00.853435   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:43:02.973100   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:43:02.973100   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:43:02.974188   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-818700" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-818700
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-818700: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-818700" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-818700	172.20.50.55
multinode-818700-m02	172.20.62.186
multinode-818700-m03	172.20.63.150

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-818700 -n multinode-818700
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-818700 -n multinode-818700: (11.8046398s)
helpers_test.go:252: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 logs -n 25: (8.3673868s)
helpers_test.go:260: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                           ARGS                                                                                            │     PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-818700 cp testdata\cp-test.txt multinode-818700-m02:/home/docker/cp-test.txt                                                                                                    │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:28 UTC │ 08 Sep 25 12:28 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m02 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:28 UTC │ 08 Sep 25 12:28 UTC │
	│ cp      │ multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700-m02.txt │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:28 UTC │ 08 Sep 25 12:28 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m02 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:28 UTC │ 08 Sep 25 12:28 UTC │
	│ cp      │ multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt multinode-818700:/home/docker/cp-test_multinode-818700-m02_multinode-818700.txt                                         │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:28 UTC │ 08 Sep 25 12:29 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m02 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:29 UTC │ 08 Sep 25 12:29 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700 sudo cat /home/docker/cp-test_multinode-818700-m02_multinode-818700.txt                                                                          │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:29 UTC │ 08 Sep 25 12:29 UTC │
	│ cp      │ multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt multinode-818700-m03:/home/docker/cp-test_multinode-818700-m02_multinode-818700-m03.txt                                 │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:29 UTC │ 08 Sep 25 12:29 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m02 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:29 UTC │ 08 Sep 25 12:29 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m03 sudo cat /home/docker/cp-test_multinode-818700-m02_multinode-818700-m03.txt                                                                  │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:29 UTC │ 08 Sep 25 12:30 UTC │
	│ cp      │ multinode-818700 cp testdata\cp-test.txt multinode-818700-m03:/home/docker/cp-test.txt                                                                                                    │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:30 UTC │ 08 Sep 25 12:30 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m03 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:30 UTC │ 08 Sep 25 12:30 UTC │
	│ cp      │ multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700-m03.txt │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:30 UTC │ 08 Sep 25 12:30 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m03 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:30 UTC │ 08 Sep 25 12:30 UTC │
	│ cp      │ multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt multinode-818700:/home/docker/cp-test_multinode-818700-m03_multinode-818700.txt                                         │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:30 UTC │ 08 Sep 25 12:31 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m03 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:31 UTC │ 08 Sep 25 12:31 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700 sudo cat /home/docker/cp-test_multinode-818700-m03_multinode-818700.txt                                                                          │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:31 UTC │ 08 Sep 25 12:31 UTC │
	│ cp      │ multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt multinode-818700-m02:/home/docker/cp-test_multinode-818700-m03_multinode-818700-m02.txt                                 │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:31 UTC │ 08 Sep 25 12:31 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m03 sudo cat /home/docker/cp-test.txt                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:31 UTC │ 08 Sep 25 12:31 UTC │
	│ ssh     │ multinode-818700 ssh -n multinode-818700-m02 sudo cat /home/docker/cp-test_multinode-818700-m03_multinode-818700-m02.txt                                                                  │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:31 UTC │ 08 Sep 25 12:31 UTC │
	│ node    │ multinode-818700 node stop m03                                                                                                                                                            │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:31 UTC │ 08 Sep 25 12:32 UTC │
	│ node    │ multinode-818700 node start m03 -v=5 --alsologtostderr                                                                                                                                    │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │ 08 Sep 25 12:35 UTC │
	│ node    │ list -p multinode-818700                                                                                                                                                                  │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │                     │
	│ stop    │ -p multinode-818700                                                                                                                                                                       │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:38 UTC │
	│ start   │ -p multinode-818700 --wait=true -v=5 --alsologtostderr                                                                                                                                    │ multinode-818700 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 12:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:38:01
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:38:01.609371   13072 out.go:360] Setting OutFile to fd 2044 ...
	I0908 12:38:01.692836   13072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:38:01.692836   13072 out.go:374] Setting ErrFile to fd 2016...
	I0908 12:38:01.692836   13072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:38:01.711255   13072 out.go:368] Setting JSON to false
	I0908 12:38:01.716512   13072 start.go:130] hostinfo: {"hostname":"minikube6","uptime":303933,"bootTime":1757031148,"procs":182,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 12:38:01.716512   13072 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 12:38:01.846989   13072 out.go:179] * [multinode-818700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 12:38:01.929616   13072 notify.go:220] Checking for updates...
	I0908 12:38:01.950624   13072 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:38:02.036032   13072 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:38:02.095942   13072 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 12:38:02.102175   13072 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:38:02.134779   13072 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:38:02.143467   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:38:02.143467   13072 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:38:07.485139   13072 out.go:179] * Using the hyperv driver based on existing profile
	I0908 12:38:07.541014   13072 start.go:304] selected driver: hyperv
	I0908 12:38:07.541072   13072 start.go:918] validating driver "hyperv" against &{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:38:07.541072   13072 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:38:07.595483   13072 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:38:07.595483   13072 cni.go:84] Creating CNI manager for ""
	I0908 12:38:07.595483   13072 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0908 12:38:07.596130   13072 start.go:348] cluster config:
	{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.50.55 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:38:07.596491   13072 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:38:07.609040   13072 out.go:179] * Starting "multinode-818700" primary control-plane node in "multinode-818700" cluster
	I0908 12:38:07.615821   13072 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:38:07.615821   13072 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 12:38:07.615821   13072 cache.go:58] Caching tarball of preloaded images
	I0908 12:38:07.615821   13072 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 12:38:07.615821   13072 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:38:07.615821   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:38:07.620038   13072 start.go:360] acquireMachinesLock for multinode-818700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 12:38:07.620243   13072 start.go:364] duration metric: took 205.3µs to acquireMachinesLock for "multinode-818700"
	I0908 12:38:07.620737   13072 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:38:07.620737   13072 fix.go:54] fixHost starting: 
	I0908 12:38:07.621553   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:10.276640   13072 main.go:141] libmachine: [stdout =====>] : Off
	
	I0908 12:38:10.277405   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:10.277405   13072 fix.go:112] recreateIfNeeded on multinode-818700: state=Stopped err=<nil>
	W0908 12:38:10.277405   13072 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:38:10.284458   13072 out.go:252] * Restarting existing hyperv VM for "multinode-818700" ...
	I0908 12:38:10.284458   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-818700
	I0908 12:38:13.306249   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:13.307279   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:13.307279   13072 main.go:141] libmachine: Waiting for host to start...
	I0908 12:38:13.307390   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:15.565191   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:15.565191   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:15.565447   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:18.015920   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:18.015920   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:19.017102   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:21.174693   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:21.174693   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:21.175566   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:23.826969   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:23.826969   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:24.827793   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:27.030539   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:27.030539   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:27.030683   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:29.609925   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:29.610108   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:30.611071   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:32.810459   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:32.811453   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:32.811681   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:35.338447   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:35.338447   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:36.339407   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:38.469302   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:38.470082   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:38.470428   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:40.838773   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:38:40.838773   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:41.840141   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:44.010454   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:44.010454   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:44.011157   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:46.517691   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:38:46.517829   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:46.520770   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:48.581136   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:48.581345   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:48.581345   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:50.999063   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:38:50.999063   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:50.999063   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:38:51.004775   13072 machine.go:93] provisionDockerMachine start ...
	I0908 12:38:51.004775   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:53.093737   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:53.093737   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:53.094750   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:38:55.594798   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:38:55.594798   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:55.600673   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:38:55.601478   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:38:55.601478   13072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:38:55.739535   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 12:38:55.739622   13072 buildroot.go:166] provisioning hostname "multinode-818700"
	I0908 12:38:55.739686   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:38:57.784326   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:38:57.784919   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:38:57.784919   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:00.317323   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:00.317323   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:00.323466   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:00.324073   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:00.324073   13072 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-818700 && echo "multinode-818700" | sudo tee /etc/hostname
	I0908 12:39:00.493999   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-818700
	
	I0908 12:39:00.494119   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:02.605605   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:02.605699   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:02.605766   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:05.116295   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:05.117202   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:05.123103   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:05.123804   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:05.123804   13072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-818700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-818700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-818700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:39:05.284542   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:39:05.284598   13072 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 12:39:05.284716   13072 buildroot.go:174] setting up certificates
	I0908 12:39:05.284748   13072 provision.go:84] configureAuth start
	I0908 12:39:05.284775   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:07.350197   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:07.350197   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:07.350197   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:09.763695   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:09.764664   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:09.764664   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:11.758974   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:11.759082   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:11.759082   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:14.218149   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:14.218190   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:14.218190   13072 provision.go:143] copyHostCerts
	I0908 12:39:14.218190   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 12:39:14.218887   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 12:39:14.218946   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 12:39:14.219094   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 12:39:14.220684   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 12:39:14.221213   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 12:39:14.221292   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 12:39:14.221292   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 12:39:14.222835   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 12:39:14.222835   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 12:39:14.222835   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 12:39:14.223634   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 12:39:14.224342   13072 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-818700 san=[127.0.0.1 172.20.59.7 localhost minikube multinode-818700]
	I0908 12:39:15.272739   13072 provision.go:177] copyRemoteCerts
	I0908 12:39:15.283735   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:39:15.283735   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:17.264376   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:17.264376   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:17.265073   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:19.696082   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:19.696082   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:19.696632   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:19.812688   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.528825s)
	I0908 12:39:19.812810   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 12:39:19.813025   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:39:19.866024   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 12:39:19.866146   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0908 12:39:19.920246   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 12:39:19.920994   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:39:19.983210   13072 provision.go:87] duration metric: took 14.6982062s to configureAuth
	I0908 12:39:19.983387   13072 buildroot.go:189] setting minikube options for container-runtime
	I0908 12:39:19.984081   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:39:19.984081   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:22.117502   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:22.117502   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:22.118079   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:24.586114   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:24.586114   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:24.591388   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:24.591920   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:24.591920   13072 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:39:24.754929   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 12:39:24.754992   13072 buildroot.go:70] root file system type: tmpfs
	I0908 12:39:24.755083   13072 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:39:24.755083   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:26.840715   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:26.840715   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:26.840715   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:29.433909   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:29.433909   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:29.440401   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:29.440733   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:29.440733   13072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:39:29.601362   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 12:39:29.602013   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:31.642980   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:31.642980   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:31.643523   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:34.193694   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:34.193694   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:34.201447   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:34.201647   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:34.201647   13072 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:39:35.845221   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 12:39:35.845221   13072 machine.go:96] duration metric: took 44.8398809s to provisionDockerMachine
	I0908 12:39:35.845221   13072 start.go:293] postStartSetup for "multinode-818700" (driver="hyperv")
	I0908 12:39:35.845221   13072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:39:35.857524   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:39:35.857524   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:37.898074   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:37.898297   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:37.898297   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:40.366725   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:40.366725   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:40.367546   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:40.489516   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6319341s)
	I0908 12:39:40.502061   13072 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:39:40.509517   13072 info.go:137] Remote host: Buildroot 2025.02
	I0908 12:39:40.509517   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 12:39:40.510080   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 12:39:40.511208   13072 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 12:39:40.511381   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 12:39:40.522621   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:39:40.542030   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 12:39:40.594449   13072 start.go:296] duration metric: took 4.7491682s for postStartSetup
	I0908 12:39:40.594449   13072 fix.go:56] duration metric: took 1m32.97254s for fixHost
	I0908 12:39:40.594449   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:42.633699   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:42.633699   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:42.634513   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:45.259225   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:45.259225   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:45.265028   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:45.265833   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:45.265833   13072 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 12:39:45.408187   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757335185.425807946
	
	I0908 12:39:45.408187   13072 fix.go:216] guest clock: 1757335185.425807946
	I0908 12:39:45.408187   13072 fix.go:229] Guest: 2025-09-08 12:39:45.425807946 +0000 UTC Remote: 2025-09-08 12:39:40.5944494 +0000 UTC m=+99.087765001 (delta=4.831358546s)
	I0908 12:39:45.408187   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:47.448003   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:47.448702   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:47.449268   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:49.868797   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:49.868797   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:49.873942   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:39:49.874930   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.59.7 22 <nil> <nil>}
	I0908 12:39:49.874930   13072 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757335185
	I0908 12:39:50.036490   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 12:39:45 UTC 2025
	
	I0908 12:39:50.036490   13072 fix.go:236] clock set: Mon Sep  8 12:39:45 UTC 2025
	 (err=<nil>)
	I0908 12:39:50.036490   13072 start.go:83] releasing machines lock for "multinode-818700", held for 1m42.414956s
	I0908 12:39:50.036490   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:52.112588   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:52.113221   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:52.113345   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:54.656827   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:54.656827   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:54.660938   13072 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 12:39:54.661061   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:54.671561   13072 ssh_runner.go:195] Run: cat /version.json
	I0908 12:39:54.671654   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:39:56.831923   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:56.832478   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:39:59.467137   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:59.467137   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:59.468013   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:59.497104   13072 main.go:141] libmachine: [stdout =====>] : 172.20.59.7
	
	I0908 12:39:59.497456   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:39:59.497835   13072 sshutil.go:53] new ssh client: &{IP:172.20.59.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:39:59.559215   13072 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8982152s)
	W0908 12:39:59.559300   13072 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0908 12:39:59.593802   13072 ssh_runner.go:235] Completed: cat /version.json: (4.9221789s)
	I0908 12:39:59.605415   13072 ssh_runner.go:195] Run: systemctl --version
	I0908 12:39:59.626758   13072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 12:39:59.637601   13072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 12:39:59.648690   13072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:39:59.681308   13072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 12:39:59.681345   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:39:59.681662   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0908 12:39:59.730084   13072 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 12:39:59.730084   13072 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 12:39:59.753395   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 12:39:59.791282   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 12:39:59.812233   13072 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:39:59.824232   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:39:59.857037   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:39:59.888063   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:39:59.920569   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:39:59.952975   13072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:39:59.988850   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:40:00.023200   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:40:00.059143   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
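The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place, one edit at a time. Two of those edits can be sketched against a throwaway sample file (the sample content is illustrative, not copied from the VM):

```shell
# Apply two of the logged edits to a temp copy: pin the sandbox image and
# force SystemdCgroup off (minikube configured "cgroupfs" as the driver).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

Note the `\1` backreferences preserve the original indentation, which is why the log's expressions capture leading spaces rather than anchoring on the key alone.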
	I0908 12:40:00.094286   13072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:40:00.112906   13072 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
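The missing `/proc/sys/net/bridge/bridge-nf-call-iptables` key means `br_netfilter` is not yet loaded, which is why the very next step is `sudo modprobe br_netfilter`. A sketch of that check-then-load decision; the optional root argument is a hypothetical testing hook, not something the log uses:

```shell
# Returns success when the bridge-netfilter sysctl file is visible; the
# optional first argument substitutes a fake root (hypothetical test hook).
br_netfilter_ready() {
  root="${1:-}"
  [ -e "${root}/proc/sys/net/bridge/bridge-nf-call-iptables" ]
}

br_netfilter_ready /definitely-missing \
  || echo "module not loaded; would run: sudo modprobe br_netfilter"
```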
	I0908 12:40:00.124631   13072 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 12:40:00.155119   13072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:40:00.186151   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:00.430601   13072 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:40:00.500975   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:40:00.510554   13072 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:40:00.556054   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:40:00.591647   13072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:40:00.638099   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:40:00.671147   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:40:00.706101   13072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 12:40:00.772892   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:40:00.799354   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:40:00.846910   13072 ssh_runner.go:195] Run: which cri-dockerd
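The `/etc/crictl.yaml` written just above replaces the earlier one that pointed at containerd: once containerd is stopped and cri-dockerd takes over, `crictl` must be aimed at the new socket. A sketch of that one-line config, written to a temp file instead of `/etc` (the endpoint value is taken from the log; the path is swapped for safety):

```shell
# Recreate the crictl.yaml shape from the log in a temp file.
crictl_cfg=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$crictl_cfg"
cat "$crictl_cfg"
```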
	I0908 12:40:00.866010   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:40:00.885595   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:40:00.932179   13072 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:40:01.158756   13072 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:40:01.383317   13072 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:40:01.383680   13072 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:40:01.432180   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:40:01.467193   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:01.696647   13072 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:40:02.540518   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:40:02.577126   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:40:02.612325   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:40:02.648945   13072 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:40:02.875857   13072 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:40:03.110339   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:03.347129   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:40:03.414120   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:40:03.450822   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:03.685990   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:40:03.849661   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:40:03.872329   13072 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:40:03.883568   13072 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:40:03.891848   13072 start.go:563] Will wait 60s for crictl version
	I0908 12:40:03.903165   13072 ssh_runner.go:195] Run: which crictl
	I0908 12:40:03.919122   13072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:40:03.974682   13072 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:40:03.983714   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:40:04.025889   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:40:04.066105   13072 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:40:04.066143   13072 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 12:40:04.070759   13072 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 12:40:04.074255   13072 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 12:40:04.074255   13072 ip.go:218] interface addr: 172.20.48.1/20
	I0908 12:40:04.084128   13072 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 12:40:04.091158   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
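The `{ grep -v ...; echo ...; } > tmp; sudo cp` pipeline above makes the hosts entry idempotent: any previous line for the name is stripped before exactly one fresh line is appended. The same pattern, sketched against a temp file (`add_hosts_entry` is a hypothetical helper):

```shell
# Idempotently (re)write a "<ip>\t<name>" line in a hosts-style file.
add_hosts_entry() {
  hosts="$1"; ip="$2"; name="$3"; tab=$(printf '\t')
  { grep -v "${tab}${name}\$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
  mv "$hosts.new" "$hosts"
}

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"
add_hosts_entry "$hosts" 172.20.48.1 host.minikube.internal
add_hosts_entry "$hosts" 172.20.48.1 host.minikube.internal  # re-run: still one entry
```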
	I0908 12:40:04.114381   13072 kubeadm.go:875] updating cluster {Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:40:04.114381   13072 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:40:04.124429   13072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:40:04.155894   13072 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0908 12:40:04.155894   13072 docker.go:621] Images already preloaded, skipping extraction
	I0908 12:40:04.163761   13072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:40:04.186783   13072 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0908 12:40:04.186783   13072 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:40:04.186783   13072 kubeadm.go:926] updating node { 172.20.59.7 8443 v1.34.0 docker true true} ...
	I0908 12:40:04.187752   13072 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-818700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:40:04.195782   13072 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 12:40:04.267110   13072 cni.go:84] Creating CNI manager for ""
	I0908 12:40:04.267110   13072 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0908 12:40:04.267110   13072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:40:04.267110   13072 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.59.7 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-818700 NodeName:multinode-818700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.59.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.59.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:40:04.267110   13072 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.59.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-818700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.59.7"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.59.7"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
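The generated kubeadm config above is one file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A trimmed skeleton (full field lists elided) with a quick structural check:

```shell
# Skeleton of the four-document kubeadm config, plus a count of the kinds.
f=$(mktemp)
cat > "$f" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$f"
```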
	
	I0908 12:40:04.278029   13072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:40:04.301927   13072 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:40:04.312957   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:40:04.333290   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0908 12:40:04.368859   13072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:40:04.402521   13072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I0908 12:40:04.455352   13072 ssh_runner.go:195] Run: grep 172.20.59.7	control-plane.minikube.internal$ /etc/hosts
	I0908 12:40:04.461771   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.59.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:40:04.498498   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:04.744131   13072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:40:04.783453   13072 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700 for IP: 172.20.59.7
	I0908 12:40:04.783527   13072 certs.go:194] generating shared ca certs ...
	I0908 12:40:04.783527   13072 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:04.784489   13072 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 12:40:04.784900   13072 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 12:40:04.784900   13072 certs.go:256] generating profile certs ...
	I0908 12:40:04.785767   13072 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\client.key
	I0908 12:40:04.785767   13072 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4
	I0908 12:40:04.785767   13072 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.7]
	I0908 12:40:04.972131   13072 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4 ...
	I0908 12:40:04.972131   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4: {Name:mkf02d81f3a64226491daaedb867425cb601c513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:04.974105   13072 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4 ...
	I0908 12:40:04.974105   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4: {Name:mk33ea48fd7cabb154abff9d71d34b0131ffcb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:04.975121   13072 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt.1fdd56c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt
	I0908 12:40:04.991100   13072 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key.1fdd56c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key
	I0908 12:40:04.992096   13072 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key
	I0908 12:40:04.992096   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 12:40:04.992845   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 12:40:04.993130   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 12:40:04.993248   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 12:40:04.993248   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0908 12:40:04.993248   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0908 12:40:04.993816   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0908 12:40:04.993877   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0908 12:40:04.993877   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 12:40:04.993877   13072 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 12:40:04.993877   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 12:40:04.994833   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 12:40:04.994833   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 12:40:04.994833   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 12:40:04.995877   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 12:40:04.995877   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 12:40:04.995877   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:04.996506   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 12:40:04.997762   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:40:05.057751   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:40:05.114963   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:40:05.166146   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:40:05.220827   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 12:40:05.269954   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:40:05.319080   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:40:05.375087   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:40:05.424402   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 12:40:05.475078   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:40:05.526604   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 12:40:05.579086   13072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:40:05.623528   13072 ssh_runner.go:195] Run: openssl version
	I0908 12:40:05.642660   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:40:05.673936   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:05.681087   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:05.691417   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:40:05.711145   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:40:05.738334   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 12:40:05.773240   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 12:40:05.781117   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 12:40:05.791901   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 12:40:05.814750   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 12:40:05.855413   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 12:40:05.887694   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 12:40:05.895785   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 12:40:05.907249   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 12:40:05.928762   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
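The `openssl x509 -hash` plus `ln -fs ... <hash>.0` steps above reproduce what `c_rehash` does: the symlink is named after the certificate's subject hash so OpenSSL can locate the CA by hash lookup in `/etc/ssl/certs`. A sketch using a freshly generated throwaway cert (all paths temporary, subject name invented):

```shell
# Generate a throwaway CA cert, compute its subject hash, and create the
# <hash>.0 symlink the way the logged commands do.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$d/ca.key" -out "$d/ca.pem" -subj /CN=sketchCA 2>/dev/null
h=$(openssl x509 -hash -noout -in "$d/ca.pem")
ln -fs "$d/ca.pem" "$d/$h.0"
ls -l "$d/$h.0"
```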
	I0908 12:40:05.963369   13072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:40:05.981490   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:40:06.003106   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:40:06.025167   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:40:06.047011   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:40:06.068293   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:40:06.090004   13072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
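The string of `openssl x509 ... -checkend 86400` runs above is an expiry sweep: `-checkend N` exits non-zero when the certificate will expire within the next N seconds, so 86400 asks "still valid a day from now?". A sketch against a throwaway 365-day cert (temporary paths, invented subject):

```shell
# -checkend exits 0 while the cert stays valid past the window, non-zero
# otherwise; here a fresh 365-day cert easily passes the 24h check.
d2=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "$d2/s.key" -out "$d2/s.crt" -subj /CN=sketch 2>/dev/null
if openssl x509 -noout -in "$d2/s.crt" -checkend 86400; then
  echo "valid for at least another day"
fi
```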
	I0908 12:40:06.102240   13072 kubeadm.go:392] StartCluster: {Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.62.186 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:40:06.110639   13072 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:40:06.152241   13072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:40:06.175947   13072 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:40:06.175947   13072 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:40:06.186834   13072 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:40:06.206845   13072 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:40:06.206845   13072 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-818700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:40:06.206845   13072 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-818700" cluster setting kubeconfig missing "multinode-818700" context setting]
	I0908 12:40:06.206845   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:06.231365   13072 kapi.go:59] client config for multinode-818700: &rest.Config{Host:"https://172.20.59.7:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-818700/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a967c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 12:40:06.232788   13072 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 12:40:06.232788   13072 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 12:40:06.246485   13072 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:40:06.266002   13072 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.20.50.55
	+  advertiseAddress: 172.20.59.7
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-818700"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.20.50.55"
	+      value: "172.20.59.7"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.20.50.55"]
	+  certSANs: ["127.0.0.1", "localhost", "172.20.59.7"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
	I0908 12:40:06.266002   13072 kubeadm.go:1152] stopping kube-system containers ...
	I0908 12:40:06.274035   13072 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:40:06.301860   13072 docker.go:484] Stopping containers: [4b397652bed6 51939f01ba77 da2be864ea38 21dbed80ecd5 0e97a2b4abd9 a793eb6b8d63 cf6168c36a0f 7e1c24e28ed9 4ef5a92069c2 07ac3a29d931 3ae48749732c 19b41e0f8bcf e229cb205b5d a817a17208da 62d71a5295d0 1b891663f1f6]
	I0908 12:40:06.311621   13072 ssh_runner.go:195] Run: docker stop 4b397652bed6 51939f01ba77 da2be864ea38 21dbed80ecd5 0e97a2b4abd9 a793eb6b8d63 cf6168c36a0f 7e1c24e28ed9 4ef5a92069c2 07ac3a29d931 3ae48749732c 19b41e0f8bcf e229cb205b5d a817a17208da 62d71a5295d0 1b891663f1f6
	I0908 12:40:06.353028   13072 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 12:40:06.390869   13072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:40:06.410494   13072 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 12:40:06.410494   13072 kubeadm.go:157] found existing configuration files:
	
	I0908 12:40:06.420115   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 12:40:06.438809   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 12:40:06.449433   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 12:40:06.477036   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 12:40:06.497007   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 12:40:06.508033   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:40:06.538974   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 12:40:06.559665   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 12:40:06.571328   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:40:06.606530   13072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 12:40:06.626203   13072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 12:40:06.637350   13072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 12:40:06.669099   13072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:40:06.694075   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:07.036307   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:08.669997   13072 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6335155s)
	I0908 12:40:08.670184   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:09.059358   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:09.134059   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:09.227450   13072 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:40:09.238171   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:09.742212   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:10.236299   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:10.739398   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:11.238581   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:11.291014   13072 api_server.go:72] duration metric: took 2.0636218s to wait for apiserver process to appear ...
	I0908 12:40:11.291101   13072 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:40:11.291101   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.082710   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:40:15.082784   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:40:15.082884   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.118179   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:40:15.118259   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:40:15.291953   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.306441   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:40:15.306486   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:40:15.791370   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:15.808740   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:40:15.808740   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:40:16.291801   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:16.305118   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:40:16.305118   13072 api_server.go:103] status: https://172.20.59.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:40:16.791740   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:16.803012   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 200:
	ok
	I0908 12:40:16.816031   13072 api_server.go:141] control plane version: v1.34.0
	I0908 12:40:16.816031   13072 api_server.go:131] duration metric: took 5.5248601s to wait for apiserver health ...
	I0908 12:40:16.816031   13072 cni.go:84] Creating CNI manager for ""
	I0908 12:40:16.816031   13072 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0908 12:40:16.819055   13072 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 12:40:16.833687   13072 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 12:40:16.860278   13072 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 12:40:16.860375   13072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 12:40:16.951829   13072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 12:40:18.157344   13072 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2054996s)
	I0908 12:40:18.157344   13072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:40:18.166008   13072 system_pods.go:59] 12 kube-system pods found
	I0908 12:40:18.166008   13072 system_pods.go:61] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:40:18.166008   13072 system_pods.go:61] "etcd-multinode-818700" [cf243776-ef17-4460-ac8d-1775558b5246] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kindnet-chkc2" [114504d7-aec1-449b-9900-a9a3871cdd14] Running
	I0908 12:40:18.166008   13072 system_pods.go:61] "kindnet-jb7kv" [80bdb808-36b1-4069-9919-efcfc7cc5f4b] Running
	I0908 12:40:18.166008   13072 system_pods.go:61] "kube-apiserver-multinode-818700" [27f58fe2-88dc-41bc-9b5e-f5dd0ba551b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:40:18.166008   13072 system_pods.go:61] "kube-proxy-fb8cd" [edcbee45-3ba2-4746-9d0b-321d40a96c25] Running
	I0908 12:40:18.166215   13072 system_pods.go:61] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:40:18.166215   13072 system_pods.go:61] "kube-proxy-m9smd" [d16c9eb2-1d38-4652-880b-d217ba193c1a] Running
	I0908 12:40:18.166271   13072 system_pods.go:61] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:40:18.166271   13072 system_pods.go:61] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:40:18.166271   13072 system_pods.go:74] duration metric: took 8.9268ms to wait for pod list to return data ...
	I0908 12:40:18.166271   13072 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:40:18.169954   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:18.170014   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:18.170014   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:18.170014   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:18.170014   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:18.170071   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:18.170071   13072 node_conditions.go:105] duration metric: took 3.8007ms to run NodePressure ...
	I0908 12:40:18.170071   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:40:18.926578   13072 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 12:40:18.944365   13072 kubeadm.go:735] kubelet initialised
	I0908 12:40:18.944365   13072 kubeadm.go:736] duration metric: took 17.7863ms waiting for restarted kubelet to initialise ...
	I0908 12:40:18.944365   13072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:40:18.973797   13072 ops.go:34] apiserver oom_adj: -16
	I0908 12:40:18.973834   13072 kubeadm.go:593] duration metric: took 12.7977257s to restartPrimaryControlPlane
	I0908 12:40:18.973896   13072 kubeadm.go:394] duration metric: took 12.8714944s to StartCluster
	I0908 12:40:18.973934   13072 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:18.974225   13072 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:40:18.975921   13072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:40:18.977816   13072 start.go:235] Will wait 6m0s for node &{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 12:40:18.977816   13072 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:40:18.978089   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:40:18.983185   13072 out.go:179] * Enabled addons: 
	I0908 12:40:18.985894   13072 out.go:179] * Verifying Kubernetes components...
	I0908 12:40:18.988443   13072 addons.go:514] duration metric: took 10.664ms for enable addons: enabled=[]
	I0908 12:40:19.000533   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:40:19.328426   13072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:40:19.362166   13072 node_ready.go:35] waiting up to 6m0s for node "multinode-818700" to be "Ready" ...
	W0908 12:40:21.367954   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:23.368753   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:25.369958   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:27.868237   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:29.869669   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:31.870381   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:33.871524   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:36.368864   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:38.870431   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:41.368050   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:43.368322   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:45.369241   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:47.868719   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:49.870431   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:52.368482   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:54.369276   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:56.867996   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	W0908 12:40:58.869035   13072 node_ready.go:57] node "multinode-818700" has "Ready":"False" status (will retry)
	I0908 12:40:59.369470   13072 node_ready.go:49] node "multinode-818700" is "Ready"
	I0908 12:40:59.369545   13072 node_ready.go:38] duration metric: took 40.0059439s for node "multinode-818700" to be "Ready" ...
	I0908 12:40:59.369618   13072 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:40:59.380718   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:40:59.426156   13072 api_server.go:72] duration metric: took 40.4477113s to wait for apiserver process to appear ...
	I0908 12:40:59.426156   13072 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:40:59.426156   13072 api_server.go:253] Checking apiserver healthz at https://172.20.59.7:8443/healthz ...
	I0908 12:40:59.434287   13072 api_server.go:279] https://172.20.59.7:8443/healthz returned 200:
	ok
	I0908 12:40:59.436343   13072 api_server.go:141] control plane version: v1.34.0
	I0908 12:40:59.436343   13072 api_server.go:131] duration metric: took 10.1871ms to wait for apiserver health ...
	I0908 12:40:59.436343   13072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:40:59.446298   13072 system_pods.go:59] 12 kube-system pods found
	I0908 12:40:59.446298   13072 system_pods.go:61] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:40:59.446298   13072 system_pods.go:61] "etcd-multinode-818700" [cf243776-ef17-4460-ac8d-1775558b5246] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kindnet-chkc2" [114504d7-aec1-449b-9900-a9a3871cdd14] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kindnet-jb7kv" [80bdb808-36b1-4069-9919-efcfc7cc5f4b] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-apiserver-multinode-818700" [27f58fe2-88dc-41bc-9b5e-f5dd0ba551b7] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-proxy-fb8cd" [edcbee45-3ba2-4746-9d0b-321d40a96c25] Running
	I0908 12:40:59.446298   13072 system_pods.go:61] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:40:59.446844   13072 system_pods.go:61] "kube-proxy-m9smd" [d16c9eb2-1d38-4652-880b-d217ba193c1a] Running
	I0908 12:40:59.446844   13072 system_pods.go:61] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:40:59.446844   13072 system_pods.go:61] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:40:59.446844   13072 system_pods.go:74] duration metric: took 10.5005ms to wait for pod list to return data ...
	I0908 12:40:59.447105   13072 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:40:59.452429   13072 default_sa.go:45] found service account: "default"
	I0908 12:40:59.452429   13072 default_sa.go:55] duration metric: took 5.3246ms for default service account to be created ...
	I0908 12:40:59.452429   13072 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:40:59.457399   13072 system_pods.go:86] 12 kube-system pods found
	I0908 12:40:59.457419   13072 system_pods.go:89] "coredns-66bc5c9577-svhws" [cd9b9019-0603-4fa5-8b64-d23b1f50d4fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:40:59.457419   13072 system_pods.go:89] "etcd-multinode-818700" [cf243776-ef17-4460-ac8d-1775558b5246] Running
	I0908 12:40:59.457419   13072 system_pods.go:89] "kindnet-5drb9" [7645ef6c-8a22-4f86-9e96-70c0b24ea598] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kindnet-chkc2" [114504d7-aec1-449b-9900-a9a3871cdd14] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kindnet-jb7kv" [80bdb808-36b1-4069-9919-efcfc7cc5f4b] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-apiserver-multinode-818700" [27f58fe2-88dc-41bc-9b5e-f5dd0ba551b7] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-controller-manager-multinode-818700" [c0ff29cc-9c9b-46a9-a34b-1e3da19a80e2] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-proxy-fb8cd" [edcbee45-3ba2-4746-9d0b-321d40a96c25] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-proxy-m5ksd" [7300c145-be03-4dae-93df-7b201133bc8a] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-proxy-m9smd" [d16c9eb2-1d38-4652-880b-d217ba193c1a] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "kube-scheduler-multinode-818700" [a805a7f8-5277-4087-9ccf-2f2afcc47715] Running
	I0908 12:40:59.457474   13072 system_pods.go:89] "storage-provisioner" [c5177fef-0793-4291-adac-1b9fa372fa06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:40:59.457544   13072 system_pods.go:126] duration metric: took 5.1141ms to wait for k8s-apps to be running ...
	I0908 12:40:59.457544   13072 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:40:59.468181   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:40:59.497418   13072 system_svc.go:56] duration metric: took 39.7171ms WaitForService to wait for kubelet
	I0908 12:40:59.497418   13072 kubeadm.go:578] duration metric: took 40.5189724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:40:59.497418   13072 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:40:59.502061   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:59.502139   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:59.502175   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:59.502175   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:59.502175   13072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 12:40:59.502175   13072 node_conditions.go:123] node cpu capacity is 2
	I0908 12:40:59.502175   13072 node_conditions.go:105] duration metric: took 4.7567ms to run NodePressure ...
	I0908 12:40:59.502175   13072 start.go:241] waiting for startup goroutines ...
	I0908 12:40:59.502267   13072 start.go:246] waiting for cluster config update ...
	I0908 12:40:59.502267   13072 start.go:255] writing updated cluster config ...
	I0908 12:40:59.506150   13072 out.go:203] 
	I0908 12:40:59.509685   13072 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:40:59.528241   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:40:59.528241   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:40:59.536613   13072 out.go:179] * Starting "multinode-818700-m02" worker node in "multinode-818700" cluster
	I0908 12:40:59.540466   13072 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:40:59.540466   13072 cache.go:58] Caching tarball of preloaded images
	I0908 12:40:59.540772   13072 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 12:40:59.540772   13072 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:40:59.541299   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:40:59.543740   13072 start.go:360] acquireMachinesLock for multinode-818700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 12:40:59.543740   13072 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-818700-m02"
	I0908 12:40:59.543740   13072 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:40:59.543740   13072 fix.go:54] fixHost starting: m02
	I0908 12:40:59.544565   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:01.629753   13072 main.go:141] libmachine: [stdout =====>] : Off
	
	I0908 12:41:01.629842   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:01.629842   13072 fix.go:112] recreateIfNeeded on multinode-818700-m02: state=Stopped err=<nil>
	W0908 12:41:01.629842   13072 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:41:01.635905   13072 out.go:252] * Restarting existing hyperv VM for "multinode-818700-m02" ...
	I0908 12:41:01.635905   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-818700-m02
	I0908 12:41:04.662338   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:04.662338   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:04.662338   13072 main.go:141] libmachine: Waiting for host to start...
	I0908 12:41:04.662338   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:07.001780   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:07.001780   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:07.002755   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:09.599516   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:09.599516   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:10.599938   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:12.744066   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:12.744066   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:12.744066   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:15.284831   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:15.284831   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:16.285356   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:18.520675   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:18.520675   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:18.521334   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:21.071572   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:21.071572   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:22.072225   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:24.264384   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:24.264384   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:24.264488   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:26.755832   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:26.755832   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:27.756184   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:30.009178   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:30.009178   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:30.009724   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:32.483427   13072 main.go:141] libmachine: [stdout =====>] : 
	I0908 12:41:32.483427   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:33.484341   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:35.696167   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:35.697199   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:35.697352   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:38.293622   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:38.293986   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:38.297103   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:40.386086   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:40.387236   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:40.387236   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:42.921144   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:42.921144   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:42.921542   13072 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700\config.json ...
	I0908 12:41:42.924489   13072 machine.go:93] provisionDockerMachine start ...
	I0908 12:41:42.924546   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:45.034436   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:45.034436   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:45.034436   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:47.561523   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:47.561832   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:47.568550   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:41:47.568712   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:41:47.568712   13072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:41:47.696061   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 12:41:47.696061   13072 buildroot.go:166] provisioning hostname "multinode-818700-m02"
	I0908 12:41:47.696061   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:49.841810   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:49.842022   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:49.842122   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:52.321745   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:52.321745   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:52.328085   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:41:52.328797   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:41:52.328913   13072 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-818700-m02 && echo "multinode-818700-m02" | sudo tee /etc/hostname
	I0908 12:41:52.491464   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-818700-m02
	
	I0908 12:41:52.491464   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:54.554780   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:54.555673   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:54.555673   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:41:57.010949   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:41:57.010949   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:57.016622   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:41:57.017048   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:41:57.017048   13072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-818700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-818700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-818700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:41:57.164603   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:41:57.164673   13072 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0908 12:41:57.164673   13072 buildroot.go:174] setting up certificates
	I0908 12:41:57.164673   13072 provision.go:84] configureAuth start
	I0908 12:41:57.164775   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:41:59.335506   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:41:59.336263   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:41:59.336263   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:01.827652   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:01.827652   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:01.827652   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:03.906965   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:03.906965   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:03.907602   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:06.409304   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:06.409304   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:06.409611   13072 provision.go:143] copyHostCerts
	I0908 12:42:06.409782   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0908 12:42:06.410212   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0908 12:42:06.410212   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0908 12:42:06.410769   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0908 12:42:06.412045   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0908 12:42:06.412382   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0908 12:42:06.412382   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0908 12:42:06.412833   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0908 12:42:06.413831   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0908 12:42:06.414196   13072 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0908 12:42:06.414196   13072 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0908 12:42:06.414518   13072 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1671 bytes)
	I0908 12:42:06.415554   13072 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-818700-m02 san=[127.0.0.1 172.20.54.47 localhost minikube multinode-818700-m02]
	I0908 12:42:06.579163   13072 provision.go:177] copyRemoteCerts
	I0908 12:42:06.591241   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:42:06.591359   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:08.686466   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:08.686736   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:08.686736   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:11.232959   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:11.234039   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:11.234598   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:11.336203   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7446556s)
	I0908 12:42:11.336203   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0908 12:42:11.336797   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:42:11.387615   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0908 12:42:11.388393   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0908 12:42:11.441905   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0908 12:42:11.441905   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:42:11.495932   13072 provision.go:87] duration metric: took 14.3310139s to configureAuth
	I0908 12:42:11.495932   13072 buildroot.go:189] setting minikube options for container-runtime
	I0908 12:42:11.496804   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:42:11.496902   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:13.593590   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:13.593651   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:13.593651   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:16.086198   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:16.086198   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:16.092032   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:16.092362   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:16.092362   13072 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:42:16.223149   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0908 12:42:16.223227   13072 buildroot.go:70] root file system type: tmpfs
	I0908 12:42:16.223415   13072 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:42:16.223545   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:18.293106   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:18.293106   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:18.293819   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:20.803122   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:20.803122   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:20.809379   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:20.810385   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:20.810566   13072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.20.59.7"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:42:20.961280   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.20.59.7
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 12:42:20.961280   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:23.030201   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:23.031313   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:23.031341   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:25.534657   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:25.534657   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:25.541890   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:25.542557   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:25.542633   13072 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:42:27.018819   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0908 12:42:27.018819   13072 machine.go:96] duration metric: took 44.0937181s to provisionDockerMachine
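The sequence above is minikube's write-then-swap unit update: the new `docker.service` is written to a `.new` file, diffed against the live one, and only moved into place (followed by `daemon-reload`/`restart`) when the content differs — here the `diff` fails because no unit existed yet, so the swap runs. A minimal sketch of the same idempotent-update idiom (paths and the demo target are illustrative, not the real flow):

```shell
#!/bin/sh
# Idempotent config update: write a candidate file, then replace the live
# file only when the content actually changed, so the service is not
# restarted needlessly. TARGET is a scratch path for illustration.
TARGET=/tmp/docker.service.demo
printf '%s\n' "[Unit]" "Description=demo" > "${TARGET}.new"

if diff -u "$TARGET" "${TARGET}.new" >/dev/null 2>&1; then
    rm "${TARGET}.new"            # unchanged: keep the live file as-is
    echo "unchanged"
else
    mv "${TARGET}.new" "$TARGET"  # changed or missing: swap in the new file
    echo "updated"                # real flow: systemctl daemon-reload/restart here
fi
```

On a first run (no live file) `diff` exits non-zero, matching the `can't stat` branch seen in the log.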
	I0908 12:42:27.018819   13072 start.go:293] postStartSetup for "multinode-818700-m02" (driver="hyperv")
	I0908 12:42:27.018819   13072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:42:27.032545   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:42:27.032545   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:29.223462   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:29.223462   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:29.223897   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:31.711767   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:31.712204   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:31.712876   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:31.818924   13072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7863189s)
	I0908 12:42:31.830624   13072 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:42:31.837254   13072 info.go:137] Remote host: Buildroot 2025.02
	I0908 12:42:31.837319   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0908 12:42:31.837884   13072 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0908 12:42:31.839245   13072 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> 116282.pem in /etc/ssl/certs
	I0908 12:42:31.839245   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /etc/ssl/certs/116282.pem
	I0908 12:42:31.850245   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:42:31.871530   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /etc/ssl/certs/116282.pem (1708 bytes)
	I0908 12:42:31.922079   13072 start.go:296] duration metric: took 4.9031977s for postStartSetup
	I0908 12:42:31.922079   13072 fix.go:56] duration metric: took 1m32.377175s for fixHost
	I0908 12:42:31.922205   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:34.014982   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:34.014982   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:34.015112   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:36.532954   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:36.534013   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:36.540324   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:36.540869   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:36.540869   13072 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 12:42:36.669940   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757335356.686959859
	
	I0908 12:42:36.670024   13072 fix.go:216] guest clock: 1757335356.686959859
	I0908 12:42:36.670024   13072 fix.go:229] Guest: 2025-09-08 12:42:36.686959859 +0000 UTC Remote: 2025-09-08 12:42:31.9220793 +0000 UTC m=+270.413236201 (delta=4.764880559s)
	I0908 12:42:36.670024   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:38.749729   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:38.750645   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:38.750753   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:41.257010   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:41.257010   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:41.263862   13072 main.go:141] libmachine: Using SSH client type: native
	I0908 12:42:41.264582   13072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.54.47 22 <nil> <nil>}
	I0908 12:42:41.264582   13072 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1757335356
	I0908 12:42:41.423551   13072 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep  8 12:42:36 UTC 2025
	
	I0908 12:42:41.423612   13072 fix.go:236] clock set: Mon Sep  8 12:42:36 UTC 2025
	 (err=<nil>)
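The `fix.go` lines above measure the guest/host clock delta (4.76s here) and push the host's epoch into the guest with `sudo date -s @1757335356`. The round trip can be checked with GNU `date` using the exact epoch from this log (BSD `date` uses different flags):

```shell
#!/bin/sh
# Render the epoch that was pushed to the guest in this log; it should
# match the UTC timestamp the guest echoed back after `date -s`.
epoch=1757335356
date -u -d "@$epoch" '+%Y-%m-%d %H:%M:%S UTC'
```

This prints `2025-09-08 12:42:36 UTC`, agreeing with the guest's reported `Mon Sep  8 12:42:36 UTC 2025`.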
	I0908 12:42:41.423612   13072 start.go:83] releasing machines lock for "multinode-818700-m02", held for 1m41.8785887s
	I0908 12:42:41.423855   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:43.482359   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:43.482359   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:43.482828   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:45.973251   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:45.973407   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:45.978162   13072 out.go:179] * Found network options:
	I0908 12:42:45.980594   13072 out.go:179]   - NO_PROXY=172.20.59.7
	W0908 12:42:45.983468   13072 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 12:42:45.987153   13072 out.go:179]   - NO_PROXY=172.20.59.7
	W0908 12:42:45.992629   13072 proxy.go:120] fail to check proxy env: Error ip not in block
	W0908 12:42:45.994581   13072 proxy.go:120] fail to check proxy env: Error ip not in block
	I0908 12:42:45.997447   13072 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0908 12:42:45.997447   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:46.011191   13072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:42:46.011191   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:48.157462   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:48.158369   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:42:50.849454   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:50.850270   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:50.850777   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:50.886428   13072 main.go:141] libmachine: [stdout =====>] : 172.20.54.47
	
	I0908 12:42:50.886428   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:50.887861   13072 sshutil.go:53] new ssh client: &{IP:172.20.54.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:42:50.942007   13072 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9307533s)
	W0908 12:42:50.942144   13072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 12:42:50.952857   13072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:42:50.958356   13072 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9608462s)
	W0908 12:42:50.958356   13072 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
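The status-127 failure above is the root of the `Failing to connect to https://registry.k8s.io/` warning in this test's stderr: the probe command was dispatched through `ssh_runner` into the Linux guest, where the Windows-style binary name `curl.exe` is not on `PATH`, so bash reports `command not found` (exit 127) rather than a real connectivity error. A hedged sketch of choosing the binary name per platform (the helper name is illustrative, not minikube's actual code):

```shell
#!/bin/sh
# Pick the curl binary name appropriate for the platform a command will
# actually execute on. Inside a Linux guest, `curl.exe` is never in PATH,
# which is exactly the `Process exited with status 127` seen in the log.
curl_bin() {
    case "$(uname -s)" in
        Linux|Darwin) echo "curl" ;;
        *)            echo "curl.exe" ;;   # Windows host side
    esac
}
echo "probe command: $(curl_bin) -sS -m 2 https://registry.k8s.io/"
```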
	I0908 12:42:50.991280   13072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 12:42:50.991280   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:42:50.991508   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:42:51.038877   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 12:42:51.073139   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0908 12:42:51.078025   13072 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0908 12:42:51.078434   13072 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0908 12:42:51.098004   13072 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:42:51.108947   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:42:51.143600   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:42:51.175698   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:42:51.208739   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:42:51.242449   13072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:42:51.278753   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:42:51.319913   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:42:51.352933   13072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 12:42:51.385498   13072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:42:51.404667   13072 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
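The `sysctl` failure above is expected on a fresh guest: `net.bridge.bridge-nf-call-iptables` only exists once the `br_netfilter` module is loaded, which is why the very next command is `sudo modprobe br_netfilter` followed by enabling `ip_forward`. For reference, the usual way to make both settings survive a reboot on a systemd host is a pair of drop-in files (file names here are conventional examples, not something this log writes):

```
# /etc/modules-load.d/k8s.conf — load the module at boot
br_netfilter

# /etc/sysctl.d/k8s.conf — applied once the module provides the keys
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward                = 1
```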
	I0908 12:42:51.416405   13072 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 12:42:51.449757   13072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:42:51.481799   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:51.709648   13072 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:42:51.772715   13072 start.go:495] detecting cgroup driver to use...
	I0908 12:42:51.783418   13072 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:42:51.823583   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:42:51.861663   13072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:42:51.907072   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:42:51.947861   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:42:51.990473   13072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 12:42:52.064724   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:42:52.092720   13072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:42:52.139508   13072 ssh_runner.go:195] Run: which cri-dockerd
	I0908 12:42:52.159518   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:42:52.178994   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:42:52.231638   13072 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:42:52.473156   13072 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:42:52.690788   13072 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:42:52.690880   13072 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:42:52.740223   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:42:52.777385   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:53.007826   13072 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:42:53.797488   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:42:53.835101   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:42:53.869922   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:42:53.907002   13072 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:42:54.134769   13072 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:42:54.358521   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:54.577981   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:42:54.648450   13072 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:42:54.695724   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:54.915367   13072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:42:55.075876   13072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:42:55.099945   13072 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:42:55.111838   13072 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:42:55.120287   13072 start.go:563] Will wait 60s for crictl version
	I0908 12:42:55.130557   13072 ssh_runner.go:195] Run: which crictl
	I0908 12:42:55.147985   13072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:42:55.201709   13072 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:42:55.211253   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:42:55.253651   13072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:42:55.296256   13072 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:42:55.298783   13072 out.go:179]   - env NO_PROXY=172.20.59.7
	I0908 12:42:55.301327   13072 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0908 12:42:55.304392   13072 ip.go:215] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4f:5e:c2 Flags:up|broadcast|multicast|running}
	I0908 12:42:55.307392   13072 ip.go:218] interface addr: fe80::a43d:dd17:5b4e:e872/64
	I0908 12:42:55.307392   13072 ip.go:218] interface addr: 172.20.48.1/20
	I0908 12:42:55.316388   13072 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0908 12:42:55.324352   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
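The `/etc/hosts` update above uses a `grep -v` + append + `sudo cp` sequence so that `host.minikube.internal` is replaced rather than duplicated on repeated runs. The same idempotent pattern, exercised against a scratch file instead of the root-owned `/etc/hosts`:

```shell
#!/bin/sh
# Idempotently pin "NAME -> IP" in a hosts-style file: drop any existing
# line for NAME, append the fresh mapping, and copy the result back.
# (The real flow copies via `sudo cp` because /etc/hosts is root-owned.)
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n172.20.48.1\thost.minikube.internal\n' > "$HOSTS"

pin_host() {  # pin_host <ip> <name> <file>
    tab=$(printf '\t')
    { grep -v "${tab}$2\$" "$3"; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
    cp "/tmp/h.$$" "$3" && rm "/tmp/h.$$"
}

# Re-pin with a new IP (illustrative address): the old entry is removed first.
pin_host 172.20.99.9 host.minikube.internal "$HOSTS"
grep host.minikube.internal "$HOSTS"
```

After the call the file contains exactly one `host.minikube.internal` line, now pointing at the new address.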
	I0908 12:42:55.350780   13072 mustload.go:65] Loading cluster: multinode-818700
	I0908 12:42:55.351474   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:42:55.352251   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:42:57.477666   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:42:57.477666   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:42:57.478070   13072 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:42:57.478989   13072 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-818700 for IP: 172.20.54.47
	I0908 12:42:57.479112   13072 certs.go:194] generating shared ca certs ...
	I0908 12:42:57.479167   13072 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:42:57.479167   13072 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0908 12:42:57.479984   13072 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0908 12:42:57.479984   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0908 12:42:57.480524   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0908 12:42:57.480844   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0908 12:42:57.480993   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0908 12:42:57.481615   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem (1338 bytes)
	W0908 12:42:57.481973   13072 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628_empty.pem, impossibly tiny 0 bytes
	I0908 12:42:57.482089   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0908 12:42:57.482456   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0908 12:42:57.482792   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0908 12:42:57.482977   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1671 bytes)
	I0908 12:42:57.483655   13072 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem (1708 bytes)
	I0908 12:42:57.483981   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem -> /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.484227   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:57.484384   13072 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem -> /usr/share/ca-certificates/11628.pem
	I0908 12:42:57.484384   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:42:57.549296   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:42:57.603656   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:42:57.656348   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:42:57.719343   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\116282.pem --> /usr/share/ca-certificates/116282.pem (1708 bytes)
	I0908 12:42:57.771061   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:42:57.824925   13072 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\11628.pem --> /usr/share/ca-certificates/11628.pem (1338 bytes)
	I0908 12:42:57.886853   13072 ssh_runner.go:195] Run: openssl version
	I0908 12:42:57.907285   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116282.pem && ln -fs /usr/share/ca-certificates/116282.pem /etc/ssl/certs/116282.pem"
	I0908 12:42:57.937457   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.945585   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:54 /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.956386   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116282.pem
	I0908 12:42:57.977131   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116282.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:42:58.009298   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:42:58.044869   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:58.054553   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:58.070902   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:42:58.095219   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:42:58.132705   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11628.pem && ln -fs /usr/share/ca-certificates/11628.pem /etc/ssl/certs/11628.pem"
	I0908 12:42:58.164961   13072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11628.pem
	I0908 12:42:58.173435   13072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:54 /usr/share/ca-certificates/11628.pem
	I0908 12:42:58.183743   13072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11628.pem
	I0908 12:42:58.204082   13072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11628.pem /etc/ssl/certs/51391683.0"
	I0908 12:42:58.236755   13072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:42:58.242801   13072 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 12:42:58.242801   13072 kubeadm.go:926] updating node {m02 172.20.54.47 8443 v1.34.0 docker false true} ...
	I0908 12:42:58.242801   13072 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-818700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.54.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:42:58.253673   13072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:42:58.277066   13072 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:42:58.288891   13072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0908 12:42:58.308125   13072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0908 12:42:58.346573   13072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:42:58.394915   13072 ssh_runner.go:195] Run: grep 172.20.59.7	control-plane.minikube.internal$ /etc/hosts
	I0908 12:42:58.401165   13072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.59.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:42:58.437207   13072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:42:58.662492   13072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:42:58.715977   13072 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:42:58.716956   13072 start.go:317] joinCluster: &{Name:multinode-818700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-818700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.59.7 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.54.47 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.63.150 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:42:58.717185   13072 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.20.54.47 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0908 12:42:58.717185   13072 host.go:66] Checking if "multinode-818700-m02" exists ...
	I0908 12:42:58.717432   13072 mustload.go:65] Loading cluster: multinode-818700
	I0908 12:42:58.718333   13072 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:42:58.718887   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:43:00.841661   13072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:43:00.841661   13072 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:43:00.841661   13072 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:43:00.841661   13072 api_server.go:166] Checking apiserver status ...
	I0908 12:43:00.853435   13072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:43:00.853435   13072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	
	
	==> Docker <==
	Sep 08 12:40:03 multinode-818700 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Starting cri-dockerd 0.4.0 (b9b8893)"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Start docker client with request timeout 0s"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Loaded network plugin cni"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 08 12:40:03 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:03Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 08 12:40:03 multinode-818700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 08 12:40:09 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-svhws_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"da2be864ea38e76c0f6a99cd466b48d160ca9c84f8fcbfab2dcb59e65cd1c26d\""
	Sep 08 12:40:09 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-ztvwm_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ef2fb474e24471810da79ad574291ce8912dfdfa10973245f1c26d500bef6092\""
	Sep 08 12:40:10 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f001ecfb262a38542616dc94ab5c19668882413146a39c65aedafbd810394d99/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:40:10 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91b43e950145158327a98165b2eb691834b0441886ace320b2a25dc9e0756b8b/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:40:10 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e30f02e799efb07e2fc540cd26b0fde073f5c7eba08407eb2ca57c3c549a8dc6/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:40:10 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/59184df800f3e9788fa4af12163ca90340b40e4a94e3f1cf287bf60240c208d9/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:40:15 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 08 12:40:16 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b04727bec497e65c39e6847cdc5768087a54f6b962f53cb1ebbc5d2164818b12/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:40:16 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/def5c405238219c7e86b2c964d20e34d21241ad3572e0cb9cdbd0cf3af504934/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:40:16 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:40:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c92cc25ab9fb90dbcc9626a7c79b97e148c9261b99caf10a885b0adf53fd3db5/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:40:47 multinode-818700 dockerd[1331]: time="2025-09-08T12:40:47.569103848Z" level=info msg="ignoring event" container=eeede9ee6c97df31bd5628f94ca35c7c5efe4f53329d0bc39872cb643a81807b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 08 12:41:19 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:41:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91bd6485ab890e2d1d40cdec529b1a4490433c9055f1949bbc5049e1cce0bb93/resolv.conf as [nameserver 172.20.48.1]"
	Sep 08 12:41:20 multinode-818700 cri-dockerd[1700]: time="2025-09-08T12:41:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fdc962ad6d31dbfb24287b8625fd07b70d2d27263ecc39e93a3d0aadc27bbdfb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf29fadc96b51       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   fdc962ad6d31d       busybox-7b57f96db7-ztvwm
	562debcc2bcef       52546a367cc9e                                                                                         2 minutes ago       Running             coredns                   1                   91bd6485ab890       coredns-66bc5c9577-svhws
	a148c987dc622       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   def5c40523821       storage-provisioner
	8b90ae57c06ec       409467f978b4a                                                                                         3 minutes ago       Running             kindnet-cni               1                   c92cc25ab9fb9       kindnet-5drb9
	eeede9ee6c97d       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   def5c40523821       storage-provisioner
	8b62c35b5fad9       df0860106674d                                                                                         3 minutes ago       Running             kube-proxy                1                   b04727bec497e       kube-proxy-m5ksd
	7769798aa7e4e       5f1f5298c888d                                                                                         3 minutes ago       Running             etcd                      0                   59184df800f3e       etcd-multinode-818700
	050b3801f1c32       90550c43ad2bc                                                                                         3 minutes ago       Running             kube-apiserver            0                   e30f02e799efb       kube-apiserver-multinode-818700
	d54aa30983d44       46169d968e920                                                                                         3 minutes ago       Running             kube-scheduler            1                   91b43e9501451       kube-scheduler-multinode-818700
	afe853e710c10       a0af72f2ec6d6                                                                                         3 minutes ago       Running             kube-controller-manager   1                   f001ecfb262a3       kube-controller-manager-multinode-818700
	b1bc7b0f492c1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   ef2fb474e2447       busybox-7b57f96db7-ztvwm
	4b397652bed65       52546a367cc9e                                                                                         26 minutes ago      Exited              coredns                   0                   da2be864ea38e       coredns-66bc5c9577-svhws
	0e97a2b4abd9c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              27 minutes ago      Exited              kindnet-cni               0                   cf6168c36a0f9       kindnet-5drb9
	a793eb6b8d638       df0860106674d                                                                                         27 minutes ago      Exited              kube-proxy                0                   7e1c24e28ed9f       kube-proxy-m5ksd
	4ef5a92069c26       a0af72f2ec6d6                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   e229cb205b5d0       kube-controller-manager-multinode-818700
	07ac3a29d9318       46169d968e920                                                                                         27 minutes ago      Exited              kube-scheduler            0                   a817a17208da8       kube-scheduler-multinode-818700
	
	
	==> coredns [4b397652bed6] <==
	[INFO] 10.244.0.3:36873 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000178403s
	[INFO] 10.244.0.3:52041 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000272205s
	[INFO] 10.244.0.3:55840 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247405s
	[INFO] 10.244.0.3:35598 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075002s
	[INFO] 10.244.0.3:43403 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162403s
	[INFO] 10.244.0.3:33397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199704s
	[INFO] 10.244.0.3:49318 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124702s
	[INFO] 10.244.1.2:35974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267706s
	[INFO] 10.244.1.2:49846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000223804s
	[INFO] 10.244.1.2:58033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146203s
	[INFO] 10.244.1.2:51546 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144103s
	[INFO] 10.244.0.3:37430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114603s
	[INFO] 10.244.0.3:54627 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000342107s
	[INFO] 10.244.0.3:39321 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000345607s
	[INFO] 10.244.0.3:51976 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000306507s
	[INFO] 10.244.1.2:59187 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168503s
	[INFO] 10.244.1.2:56345 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162604s
	[INFO] 10.244.1.2:46830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000160803s
	[INFO] 10.244.1.2:38005 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000196004s
	[INFO] 10.244.0.3:60239 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178904s
	[INFO] 10.244.0.3:42627 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233606s
	[INFO] 10.244.0.3:36186 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076402s
	[INFO] 10.244.0.3:36334 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000132303s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [562debcc2bce] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = af5a539bf320e7b94b440c1d74d092803773c352ef29bc4c36765038fd6da32ae744d176504ed878e1c79d2cff9d6ca453184f4d25fde4d65085f86eb360206d
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49859 - 44017 "HINFO IN 542983811155836255.6958005277864769484. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.032052979s
	
	
	==> describe nodes <==
	Name:               multinode-818700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-818700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=multinode-818700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_16_07_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:16:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-818700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:43:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:40:58 +0000   Mon, 08 Sep 2025 12:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:40:58 +0000   Mon, 08 Sep 2025 12:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:40:58 +0000   Mon, 08 Sep 2025 12:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:40:58 +0000   Mon, 08 Sep 2025 12:40:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.59.7
	  Hostname:    multinode-818700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e433fd2f20b4905a8f73ae4e8031898
	  System UUID:                aa27505c-10ba-8642-a967-ec436ee1d0a0
	  Boot ID:                    5f3f2cfb-70d2-4561-91bf-c0c649b9fdc1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-ztvwm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-66bc5c9577-svhws                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 etcd-multinode-818700                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m10s
	  kube-system                 kindnet-5drb9                               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      27m
	  kube-system                 kube-apiserver-multinode-818700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kube-controller-manager-multinode-818700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-m5ksd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-818700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (7%)  220Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 3m7s                   kube-proxy       
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-818700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-818700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-818700 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    27m                    kubelet          Node multinode-818700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  27m                    kubelet          Node multinode-818700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     27m                    kubelet          Node multinode-818700 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           27m                    node-controller  Node multinode-818700 event: Registered Node multinode-818700 in Controller
	  Normal   NodeReady                26m                    kubelet          Node multinode-818700 status is now: NodeReady
	  Normal   Starting                 3m16s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m16s (x8 over 3m16s)  kubelet          Node multinode-818700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m16s (x8 over 3m16s)  kubelet          Node multinode-818700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m16s (x7 over 3m16s)  kubelet          Node multinode-818700 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m16s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 3m10s                  kubelet          Node multinode-818700 has been rebooted, boot id: 5f3f2cfb-70d2-4561-91bf-c0c649b9fdc1
	  Normal   RegisteredNode           3m7s                   node-controller  Node multinode-818700 event: Registered Node multinode-818700 in Controller
	
	
	Name:               multinode-818700-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-818700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=multinode-818700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T12_19_20_0700
	                    minikube.k8s.io/version=v1.36.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:19:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-818700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:37:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Sep 2025 12:37:00 +0000   Mon, 08 Sep 2025 12:41:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Sep 2025 12:37:00 +0000   Mon, 08 Sep 2025 12:41:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Sep 2025 12:37:00 +0000   Mon, 08 Sep 2025 12:41:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Sep 2025 12:37:00 +0000   Mon, 08 Sep 2025 12:41:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.20.62.186
	  Hostname:    multinode-818700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976484Ki
	  pods:               110
	System Info:
	  Machine ID:                 3295ff67a04d4a15823f17f0c1453bd5
	  System UUID:                ac897804-3d21-a64e-960d-5d53bcb60fdc
	  Boot ID:                    7c864719-b6ad-4966-841e-c0feb0da713e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-ndqg5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-chkc2               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      24m
	  kube-system                 kube-proxy-m9smd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x3 over 24m)  kubelet          Node multinode-818700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x3 over 24m)  kubelet          Node multinode-818700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x3 over 24m)  kubelet          Node multinode-818700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     24m                cidrAllocator    Node multinode-818700-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           24m                node-controller  Node multinode-818700-m02 event: Registered Node multinode-818700-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-818700-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m7s               node-controller  Node multinode-818700-m02 event: Registered Node multinode-818700-m02 in Controller
	  Normal  NodeNotReady             2m17s              node-controller  Node multinode-818700-m02 status is now: NodeNotReady
	
	
	Name:               multinode-818700-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-818700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=multinode-818700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_08T12_35_27_0700
	                    minikube.k8s.io/version=v1.36.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:35:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-818700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:36:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Sep 2025 12:35:44 +0000   Mon, 08 Sep 2025 12:37:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Sep 2025 12:35:44 +0000   Mon, 08 Sep 2025 12:37:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Sep 2025 12:35:44 +0000   Mon, 08 Sep 2025 12:37:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Sep 2025 12:35:44 +0000   Mon, 08 Sep 2025 12:37:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.20.63.150
	  Hostname:    multinode-818700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ea0d7a7a35149bfad7131a87105f9bc
	  System UUID:                96a8522b-481e-d148-be98-b3434274b2f1
	  Boot ID:                    af401af6-d860-4609-a464-016180da7c73
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.5.0/24
	PodCIDRs:                     10.244.5.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jb7kv       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19m
	  kube-system                 kube-proxy-fb8cd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 7m55s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x3 over 19m)      kubelet          Node multinode-818700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x3 over 19m)      kubelet          Node multinode-818700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x3 over 19m)      kubelet          Node multinode-818700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-818700-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m59s (x3 over 7m59s)  kubelet          Node multinode-818700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m59s (x3 over 7m59s)  kubelet          Node multinode-818700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s (x3 over 7m59s)  kubelet          Node multinode-818700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m54s                  node-controller  Node multinode-818700-m03 event: Registered Node multinode-818700-m03 in Controller
	  Normal  NodeReady                7m41s                  kubelet          Node multinode-818700-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m54s                  node-controller  Node multinode-818700-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m7s                   node-controller  Node multinode-818700-m03 event: Registered Node multinode-818700-m03 in Controller
	
	
	==> dmesg <==
	[Sep 8 12:38] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000001] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.002267] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.002502] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.161859] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.000071] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.017712] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.756054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 8 12:40] kauditd_printk_skb: 144 callbacks suppressed
	[  +0.165096] kauditd_printk_skb: 259 callbacks suppressed
	[  +7.187347] kauditd_printk_skb: 159 callbacks suppressed
	[ +17.523472] kauditd_printk_skb: 170 callbacks suppressed
	[Sep 8 12:41] kauditd_printk_skb: 13 callbacks suppressed
	[ +15.851151] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [7769798aa7e4] <==
	{"level":"warn","ts":"2025-09-08T12:40:13.717764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.738681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.747745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.762333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.777173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.816650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.824340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.867511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.868895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.892550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.938882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.940931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.966094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:13.991789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.003987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.019801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.034210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.063595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.076118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.097313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.112136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.129149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.138688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:40:14.276405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46830","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:40:15.761797Z","caller":"traceutil/trace.go:172","msg":"trace[513484271] transaction","detail":"{read_only:false; response_revision:1860; number_of_response:1; }","duration":"133.538794ms","start":"2025-09-08T12:40:15.628239Z","end":"2025-09-08T12:40:15.761778Z","steps":["trace[513484271] 'process raft request'  (duration: 133.425093ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:43:25 up 5 min,  0 users,  load average: 0.48, 0.35, 0.16
	Linux multinode-818700 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [0e97a2b4abd9] <==
	I0908 12:36:51.605030       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:37:01.603621       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:37:01.604012       1 main.go:301] handling current node
	I0908 12:37:01.604300       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:37:01.604566       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:37:01.605030       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:37:01.605138       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:37:11.610920       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:37:11.610954       1 main.go:301] handling current node
	I0908 12:37:11.610971       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:37:11.610978       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:37:11.611349       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:37:11.611424       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:37:21.604443       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:37:21.607229       1 main.go:301] handling current node
	I0908 12:37:21.607299       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:37:21.607310       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:37:21.607964       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:37:21.608042       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:37:31.604414       1 main.go:297] Handling node with IPs: map[172.20.50.55:{}]
	I0908 12:37:31.604578       1 main.go:301] handling current node
	I0908 12:37:31.604599       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:37:31.604607       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:37:31.604872       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:37:31.604882       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	
	
	==> kindnet [8b90ae57c06e] <==
	I0908 12:42:38.463945       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:42:48.463349       1 main.go:297] Handling node with IPs: map[172.20.59.7:{}]
	I0908 12:42:48.463385       1 main.go:301] handling current node
	I0908 12:42:48.463470       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:42:48.463588       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:42:48.463826       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:42:48.463974       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:42:58.459991       1 main.go:297] Handling node with IPs: map[172.20.59.7:{}]
	I0908 12:42:58.460213       1 main.go:301] handling current node
	I0908 12:42:58.460240       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:42:58.460248       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:42:58.460934       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:42:58.461088       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:43:08.463059       1 main.go:297] Handling node with IPs: map[172.20.59.7:{}]
	I0908 12:43:08.463226       1 main.go:301] handling current node
	I0908 12:43:08.463264       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:43:08.463278       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:43:08.463950       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:43:08.463969       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	I0908 12:43:18.460041       1 main.go:297] Handling node with IPs: map[172.20.59.7:{}]
	I0908 12:43:18.460141       1 main.go:301] handling current node
	I0908 12:43:18.460160       1 main.go:297] Handling node with IPs: map[172.20.62.186:{}]
	I0908 12:43:18.460168       1 main.go:324] Node multinode-818700-m02 has CIDR [10.244.1.0/24] 
	I0908 12:43:18.497715       1 main.go:297] Handling node with IPs: map[172.20.63.150:{}]
	I0908 12:43:18.498028       1 main.go:324] Node multinode-818700-m03 has CIDR [10.244.5.0/24] 
	
	
	==> kube-apiserver [050b3801f1c3] <==
	I0908 12:40:15.207330       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0908 12:40:15.207507       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0908 12:40:15.208157       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0908 12:40:15.208190       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0908 12:40:15.208289       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0908 12:40:15.214194       1 cache.go:39] Caches are synced for autoregister controller
	I0908 12:40:15.217563       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0908 12:40:15.221009       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0908 12:40:15.221026       1 policy_source.go:240] refreshing policies
	I0908 12:40:15.236339       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 12:40:15.329936       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0908 12:40:16.027529       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0908 12:40:16.658105       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.50.55 172.20.59.7]
	I0908 12:40:16.670289       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 12:40:16.687345       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 12:40:18.165761       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 12:40:18.433456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 12:40:18.891372       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 12:40:18.913347       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 12:40:18.920571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	W0908 12:40:36.635256       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.59.7]
	I0908 12:41:24.077736       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:41:35.197740       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:42:35.572431       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:42:42.119530       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4ef5a92069c2] <==
	I0908 12:16:11.106571       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 12:16:11.204213       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-818700" podCIDRs=["10.244.0.0/24"]
	I0908 12:16:36.089332       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0908 12:19:19.891019       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-818700-m02\" does not exist"
	I0908 12:19:19.946985       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-818700-m02" podCIDRs=["10.244.1.0/24"]
	E0908 12:19:20.013294       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-818700-m02\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.2.0/24\",\"10.244.1.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-818700-m02" podCIDRs=["10.244.2.0/24"]
	E0908 12:19:20.013360       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-818700-m02\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.2.0/24\",\"10.244.1.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-818700-m02"
	E0908 12:19:20.013400       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-818700-m02': failed to patch node CIDR: Node \"multinode-818700-m02\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.2.0/24\",\"10.244.1.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0908 12:19:21.119846       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-818700-m02"
	I0908 12:19:52.072790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:24:14.383891       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-818700-m03\" does not exist"
	I0908 12:24:14.384077       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:24:14.437793       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-818700-m03" podCIDRs=["10.244.3.0/24"]
	E0908 12:24:14.500812       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-818700-m03\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.4.0/24\",\"10.244.3.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-818700-m03" podCIDRs=["10.244.4.0/24"]
	E0908 12:24:14.501210       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-818700-m03\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.4.0/24\",\"10.244.3.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-818700-m03"
	E0908 12:24:14.501962       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-818700-m03': failed to patch node CIDR: Node \"multinode-818700-m03\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.4.0/24\",\"10.244.3.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0908 12:24:16.198396       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-818700-m03"
	I0908 12:24:47.657796       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:32:56.339360       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:35:20.714693       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:35:26.735025       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-818700-m03\" does not exist"
	I0908 12:35:26.735143       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:35:26.748306       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-818700-m03" podCIDRs=["10.244.5.0/24"]
	I0908 12:35:44.882083       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:37:31.488493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	
	
	==> kube-controller-manager [afe853e710c1] <==
	I0908 12:40:18.747750       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 12:40:18.748917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	I0908 12:40:18.752954       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 12:40:18.753120       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 12:40:18.752982       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 12:40:18.757617       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 12:40:18.758188       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:40:18.760441       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 12:40:18.760825       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 12:40:18.762436       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 12:40:18.764770       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 12:40:18.767966       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 12:40:18.768077       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 12:40:18.774018       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-818700"
	I0908 12:40:18.774291       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-818700-m02"
	I0908 12:40:18.774408       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-818700-m03"
	I0908 12:40:18.774749       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 12:40:18.777229       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 12:40:18.784345       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 12:40:18.787293       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 12:40:18.792117       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:40:18.795534       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 12:40:18.797230       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:40:18.821912       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:40:58.980101       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-818700-m02"
	
	
	==> kube-proxy [8b62c35b5fad] <==
	I0908 12:40:17.861008       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:40:17.962654       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:40:17.962702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.20.59.7"]
	E0908 12:40:17.962770       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:40:18.123503       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 12:40:18.123572       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 12:40:18.123679       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:40:18.135604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:40:18.137691       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:40:18.137729       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:40:18.153987       1 config.go:200] "Starting service config controller"
	I0908 12:40:18.154039       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:40:18.154061       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:40:18.154066       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:40:18.154078       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:40:18.154085       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:40:18.155047       1 config.go:309] "Starting node config controller"
	I0908 12:40:18.155079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:40:18.155085       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:40:18.254855       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:40:18.254901       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 12:40:18.254915       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a793eb6b8d63] <==
	I0908 12:16:13.616093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:16:13.717598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:16:13.717689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.20.50.55"]
	E0908 12:16:13.718082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:16:13.775671       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 12:16:13.775749       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 12:16:13.775834       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:16:13.792255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:16:13.792951       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:16:13.792988       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:16:13.794927       1 config.go:200] "Starting service config controller"
	I0908 12:16:13.794963       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:16:13.795318       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:16:13.795358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:16:13.795376       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:16:13.795381       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:16:13.801611       1 config.go:309] "Starting node config controller"
	I0908 12:16:13.801681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:16:13.801690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:16:13.895726       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:16:13.895765       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 12:16:13.895770       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [07ac3a29d931] <==
	E0908 12:16:03.147530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 12:16:04.026388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 12:16:04.030740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 12:16:04.109598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 12:16:04.149611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 12:16:04.177331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 12:16:04.178825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 12:16:04.274373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 12:16:04.297466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 12:16:04.305787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 12:16:04.343496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 12:16:04.391271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 12:16:04.417137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0908 12:16:04.470816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 12:16:04.482472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 12:16:04.494585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 12:16:04.533153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 12:16:04.546388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 12:16:04.586267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 12:16:04.655917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0908 12:16:06.211726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:37:40.269551       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 12:37:40.290454       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 12:37:40.293208       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 12:37:40.293242       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d54aa30983d4] <==
	I0908 12:40:12.655262       1 serving.go:386] Generated self-signed cert in-memory
	W0908 12:40:15.045102       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 12:40:15.045192       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 12:40:15.045208       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 12:40:15.045961       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 12:40:15.165948       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:40:15.165980       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:40:15.169811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:40:15.169889       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:40:15.170514       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:40:15.170925       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:40:15.272560       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:40:47 multinode-818700 kubelet[2071]: E0908 12:40:47.000520    2071 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 12:40:47 multinode-818700 kubelet[2071]: E0908 12:40:47.000664    2071 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd9b9019-0603-4fa5-8b64-d23b1f50d4fe-config-volume podName:cd9b9019-0603-4fa5-8b64-d23b1f50d4fe nodeName:}" failed. No retries permitted until 2025-09-08 12:41:19.000629121 +0000 UTC m=+69.932460795 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cd9b9019-0603-4fa5-8b64-d23b1f50d4fe-config-volume") pod "coredns-66bc5c9577-svhws" (UID: "cd9b9019-0603-4fa5-8b64-d23b1f50d4fe") : object "kube-system"/"coredns" not registered
	Sep 08 12:40:47 multinode-818700 kubelet[2071]: E0908 12:40:47.101870    2071 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 08 12:40:47 multinode-818700 kubelet[2071]: E0908 12:40:47.102041    2071 projected.go:196] Error preparing data for projected volume kube-api-access-78tbl for pod default/busybox-7b57f96db7-ztvwm: object "default"/"kube-root-ca.crt" not registered
	Sep 08 12:40:47 multinode-818700 kubelet[2071]: E0908 12:40:47.102186    2071 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95c2663a-c807-4987-96e5-c595da610ef5-kube-api-access-78tbl podName:95c2663a-c807-4987-96e5-c595da610ef5 nodeName:}" failed. No retries permitted until 2025-09-08 12:41:19.10216773 +0000 UTC m=+70.033999404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-78tbl" (UniqueName: "kubernetes.io/projected/95c2663a-c807-4987-96e5-c595da610ef5-kube-api-access-78tbl") pod "busybox-7b57f96db7-ztvwm" (UID: "95c2663a-c807-4987-96e5-c595da610ef5") : object "default"/"kube-root-ca.crt" not registered
	Sep 08 12:40:47 multinode-818700 kubelet[2071]: E0908 12:40:47.346752    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-svhws" podUID="cd9b9019-0603-4fa5-8b64-d23b1f50d4fe"
	Sep 08 12:40:47 multinode-818700 kubelet[2071]: E0908 12:40:47.348625    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-ztvwm" podUID="95c2663a-c807-4987-96e5-c595da610ef5"
	Sep 08 12:40:48 multinode-818700 kubelet[2071]: I0908 12:40:48.001860    2071 scope.go:117] "RemoveContainer" containerID="51939f01ba778f6310dfc69d7b5b8d41100b664fa6c4de1a25b100f6ad8db7e3"
	Sep 08 12:40:48 multinode-818700 kubelet[2071]: I0908 12:40:48.002336    2071 scope.go:117] "RemoveContainer" containerID="eeede9ee6c97df31bd5628f94ca35c7c5efe4f53329d0bc39872cb643a81807b"
	Sep 08 12:40:48 multinode-818700 kubelet[2071]: E0908 12:40:48.002462    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c5177fef-0793-4291-adac-1b9fa372fa06)\"" pod="kube-system/storage-provisioner" podUID="c5177fef-0793-4291-adac-1b9fa372fa06"
	Sep 08 12:40:49 multinode-818700 kubelet[2071]: E0908 12:40:49.346265    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-ztvwm" podUID="95c2663a-c807-4987-96e5-c595da610ef5"
	Sep 08 12:40:49 multinode-818700 kubelet[2071]: E0908 12:40:49.346788    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-svhws" podUID="cd9b9019-0603-4fa5-8b64-d23b1f50d4fe"
	Sep 08 12:40:51 multinode-818700 kubelet[2071]: E0908 12:40:51.354335    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-ztvwm" podUID="95c2663a-c807-4987-96e5-c595da610ef5"
	Sep 08 12:40:51 multinode-818700 kubelet[2071]: E0908 12:40:51.354505    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-svhws" podUID="cd9b9019-0603-4fa5-8b64-d23b1f50d4fe"
	Sep 08 12:40:53 multinode-818700 kubelet[2071]: E0908 12:40:53.354522    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-ztvwm" podUID="95c2663a-c807-4987-96e5-c595da610ef5"
	Sep 08 12:40:53 multinode-818700 kubelet[2071]: E0908 12:40:53.354806    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-svhws" podUID="cd9b9019-0603-4fa5-8b64-d23b1f50d4fe"
	Sep 08 12:40:55 multinode-818700 kubelet[2071]: E0908 12:40:55.346750    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-svhws" podUID="cd9b9019-0603-4fa5-8b64-d23b1f50d4fe"
	Sep 08 12:40:55 multinode-818700 kubelet[2071]: E0908 12:40:55.349404    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-ztvwm" podUID="95c2663a-c807-4987-96e5-c595da610ef5"
	Sep 08 12:40:57 multinode-818700 kubelet[2071]: E0908 12:40:57.353817    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-ztvwm" podUID="95c2663a-c807-4987-96e5-c595da610ef5"
	Sep 08 12:40:57 multinode-818700 kubelet[2071]: E0908 12:40:57.353931    2071 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-svhws" podUID="cd9b9019-0603-4fa5-8b64-d23b1f50d4fe"
	Sep 08 12:40:58 multinode-818700 kubelet[2071]: I0908 12:40:58.956498    2071 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 08 12:41:03 multinode-818700 kubelet[2071]: I0908 12:41:03.347258    2071 scope.go:117] "RemoveContainer" containerID="eeede9ee6c97df31bd5628f94ca35c7c5efe4f53329d0bc39872cb643a81807b"
	Sep 08 12:41:09 multinode-818700 kubelet[2071]: I0908 12:41:09.339968    2071 scope.go:117] "RemoveContainer" containerID="19b41e0f8bcfee5ece24f3e69862f29728823557a405accea1738d0d91151e5f"
	Sep 08 12:41:09 multinode-818700 kubelet[2071]: I0908 12:41:09.393502    2071 scope.go:117] "RemoveContainer" containerID="3ae48749732c0b82072df2f89e8ac8c86d11ad3bc9533d64820f712d8f898b56"
	Sep 08 12:41:19 multinode-818700 kubelet[2071]: I0908 12:41:19.658983    2071 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bd6485ab890e2d1d40cdec529b1a4490433c9055f1949bbc5049e1cce0bb93"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-818700 -n multinode-818700
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-818700 -n multinode-818700: (12.9809381s)
helpers_test.go:269: (dbg) Run:  kubectl --context multinode-818700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (439.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (302.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-208500 --memory=3072 --alsologtostderr -v=5 --driver=hyperv
no_kubernetes_test.go:97: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-208500 --memory=3072 --alsologtostderr -v=5 --driver=hyperv: exit status 1 (4m59.6955652s)

                                                
                                                
-- stdout --
	* [NoKubernetes-208500] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-208500" primary control-plane node in "NoKubernetes-208500" cluster

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:59:05.517021    6196 out.go:360] Setting OutFile to fd 1172 ...
	I0908 12:59:05.602011    6196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:59:05.602011    6196 out.go:374] Setting ErrFile to fd 1612...
	I0908 12:59:05.602011    6196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:59:05.624016    6196 out.go:368] Setting JSON to false
	I0908 12:59:05.630635    6196 start.go:130] hostinfo: {"hostname":"minikube6","uptime":305197,"bootTime":1757031148,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 12:59:05.630905    6196 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 12:59:05.637832    6196 out.go:179] * [NoKubernetes-208500] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 12:59:05.642248    6196 notify.go:220] Checking for updates...
	I0908 12:59:05.645667    6196 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 12:59:05.650232    6196 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:59:05.655241    6196 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 12:59:05.660722    6196 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:59:05.667430    6196 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:59:05.671886    6196 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:59:05.671886    6196 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:59:12.142997    6196 out.go:179] * Using the hyperv driver based on user configuration
	I0908 12:59:12.146165    6196 start.go:304] selected driver: hyperv
	I0908 12:59:12.146165    6196 start.go:918] validating driver "hyperv" against <nil>
	I0908 12:59:12.146710    6196 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:59:12.215210    6196 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:59:12.216509    6196 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:59:12.216509    6196 cni.go:84] Creating CNI manager for ""
	I0908 12:59:12.216509    6196 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:59:12.216509    6196 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 12:59:12.217218    6196 start.go:348] cluster config:
	{Name:NoKubernetes-208500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:NoKubernetes-208500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:59:12.217269    6196 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:59:12.221872    6196 out.go:179] * Starting "NoKubernetes-208500" primary control-plane node in "NoKubernetes-208500" cluster
	I0908 12:59:12.224162    6196 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:59:12.224419    6196 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 12:59:12.224502    6196 cache.go:58] Caching tarball of preloaded images
	I0908 12:59:12.224502    6196 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 12:59:12.224502    6196 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:59:12.225233    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\NoKubernetes-208500\config.json ...
	I0908 12:59:12.225630    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\NoKubernetes-208500\config.json: {Name:mk953a1adfb1364520738ae9b48eaa4becc2f5e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:59:12.226398    6196 start.go:360] acquireMachinesLock for NoKubernetes-208500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 13:03:45.592036    6196 start.go:364] duration metric: took 4m33.3621012s to acquireMachinesLock for "NoKubernetes-208500"
	I0908 13:03:45.592176    6196 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-208500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:NoKubernetes-208500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 13:03:45.592176    6196 start.go:125] createHost starting for "" (driver="hyperv")
	I0908 13:03:45.596288    6196 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 13:03:45.596446    6196 start.go:159] libmachine.API.Create for "NoKubernetes-208500" (driver="hyperv")
	I0908 13:03:45.596446    6196 client.go:168] LocalClient.Create starting
	I0908 13:03:45.596446    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0908 13:03:45.597335    6196 main.go:141] libmachine: Decoding PEM data...
	I0908 13:03:45.597335    6196 main.go:141] libmachine: Parsing certificate...
	I0908 13:03:45.597335    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0908 13:03:45.597335    6196 main.go:141] libmachine: Decoding PEM data...
	I0908 13:03:45.597335    6196 main.go:141] libmachine: Parsing certificate...
	I0908 13:03:45.597335    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0908 13:03:47.634401    6196 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0908 13:03:47.634401    6196 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:03:47.634401    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0908 13:03:49.409053    6196 main.go:141] libmachine: [stdout =====>] : False
	
	I0908 13:03:49.409053    6196 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:03:49.409950    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 13:03:51.050215    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 13:03:51.050847    6196 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:03:51.050847    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 13:03:54.876711    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 13:03:54.876925    6196 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:03:54.878933    6196 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 13:03:55.434211    6196 main.go:141] libmachine: Creating SSH key...
	I0908 13:03:56.067273    6196 main.go:141] libmachine: Creating VM...
	I0908 13:03:56.067273    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0908 13:03:59.188401    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0908 13:03:59.188401    6196 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:03:59.188526    6196 main.go:141] libmachine: Using switch "Default Switch"
	I0908 13:03:59.188731    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0908 13:04:01.066700    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0908 13:04:01.066700    6196 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:04:01.066700    6196 main.go:141] libmachine: Creating VHD
	I0908 13:04:01.066700    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\NoKubernetes-208500\fixed.vhd' -SizeBytes 10MB -Fixed

                                                
                                                
** /stderr **
no_kubernetes_test.go:99: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-208500 --memory=3072 --alsologtostderr -v=5 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-208500 -n NoKubernetes-208500
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-208500 -n NoKubernetes-208500: exit status 7 (3.0347524s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 13:04:08.081644    4412 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-208500".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-208500 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-208500:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "NoKubernetes-208500" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.73s)

                                                
                                    
TestPause/serial/DeletePaused (43.87s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-955700 --alsologtostderr -v=5
E0908 13:17:50.437825   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p pause-955700 --alsologtostderr -v=5: exit status 1 (38.3209759s)

                                                
                                                
-- stdout --
	* Stopping node "pause-955700"  ...
	* Powering off "pause-955700" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:17:42.916836    3324 out.go:360] Setting OutFile to fd 1832 ...
	I0908 13:17:43.003540    3324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:17:43.003540    3324 out.go:374] Setting ErrFile to fd 1584...
	I0908 13:17:43.003540    3324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:17:43.024134    3324 out.go:368] Setting JSON to false
	I0908 13:17:43.039340    3324 cli_runner.go:164] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0908 13:17:43.153892    3324 config.go:182] Loaded profile config "cert-expiration-367100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:17:43.154389    3324 config.go:182] Loaded profile config "force-systemd-env-844300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:17:43.155002    3324 config.go:182] Loaded profile config "ha-331000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:17:43.155538    3324 config.go:182] Loaded profile config "kubernetes-upgrade-561700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0908 13:17:43.157175    3324 config.go:182] Loaded profile config "pause-955700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:17:43.158235    3324 config.go:182] Loaded profile config "pause-955700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:17:43.158472    3324 delete.go:301] DeleteProfiles
	I0908 13:17:43.158472    3324 delete.go:329] Deleting pause-955700
	I0908 13:17:43.158472    3324 delete.go:334] pause-955700 configuration: &{Name:pause-955700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-955700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.53.226 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:17:43.159079    3324 config.go:182] Loaded profile config "pause-955700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:17:43.159079    3324 config.go:182] Loaded profile config "pause-955700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:17:43.162227    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-955700 ).state
	I0908 13:17:45.473367    3324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 13:17:45.473450    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:17:45.473629    3324 stop.go:39] StopHost: pause-955700
	I0908 13:17:45.481966    3324 out.go:179] * Stopping node "pause-955700"  ...
	I0908 13:17:45.485550    3324 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0908 13:17:45.498499    3324 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0908 13:17:45.498499    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-955700 ).state
	I0908 13:17:47.877476    3324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 13:17:47.878062    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:17:47.878141    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-955700 ).networkadapters[0]).ipaddresses[0]
	I0908 13:17:50.584207    3324 main.go:141] libmachine: [stdout =====>] : 172.20.53.226
	
	I0908 13:17:50.584207    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:17:50.584949    3324 sshutil.go:53] new ssh client: &{IP:172.20.53.226 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-955700\id_rsa Username:docker}
	I0908 13:17:50.700405    3324 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.2017292s)
	I0908 13:17:50.713208    3324 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0908 13:17:50.795968    3324 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0908 13:17:50.868690    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-955700 ).state
	I0908 13:17:53.177738    3324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 13:17:53.178477    3324 main.go:141] libmachine: [stderr =====>] : 
	W0908 13:17:53.178749    3324 register.go:133] "PowerOff" was not found within the registered steps for "Deleting": [Deleting Stopping Done Puring home dir]
	I0908 13:17:53.182331    3324 out.go:179] * Powering off "pause-955700" via SSH ...
	I0908 13:17:53.189203    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-955700 ).state
	I0908 13:17:55.650512    3324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 13:17:55.651306    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:17:55.651454    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-955700 ).networkadapters[0]).ipaddresses[0]
	I0908 13:17:58.811074    3324 main.go:141] libmachine: [stdout =====>] : 172.20.53.226
	
	I0908 13:17:58.811074    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:17:58.818595    3324 main.go:141] libmachine: Using SSH client type: native
	I0908 13:17:58.819352    3324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61ce0] 0xd64820 <nil>  [] 0s} 172.20.53.226 22 <nil> <nil>}
	I0908 13:17:58.819449    3324 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0908 13:17:59.023370    3324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:17:59.023470    3324 stop.go:100] poweroff result: out=, err=<nil>
	I0908 13:17:59.023470    3324 main.go:141] libmachine: Stopping "pause-955700"...
	I0908 13:17:59.023470    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-955700 ).state
	I0908 13:18:02.443298    3324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 13:18:02.443543    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:18:02.443543    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM pause-955700
	I0908 13:18:18.521473    3324 main.go:141] libmachine: [stdout =====>] : 
	I0908 13:18:18.521473    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:18:18.522497    3324 main.go:141] libmachine: Waiting for host to stop...
	I0908 13:18:18.522550    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-955700 ).state
	I0908 13:18:20.807783    3324 main.go:141] libmachine: [stdout =====>] : Off
	
	I0908 13:18:20.808754    3324 main.go:141] libmachine: [stderr =====>] : 
	I0908 13:18:20.808824    3324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-955700 ).state

                                                
                                                
** /stderr **
pause_test.go:134: failed to delete minikube with args: "out/minikube-windows-amd64.exe delete -p pause-955700 --alsologtostderr -v=5" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/DeletePaused]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-955700 -n pause-955700
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-955700 -n pause-955700: exit status 7 (2.891135s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "pause-955700" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/DeletePaused]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-955700 -n pause-955700
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-955700 -n pause-955700: exit status 7 (2.6545587s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "pause-955700" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/DeletePaused (43.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (10800.498s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-707500 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.28.0
panic: test timed out after 3h0m0s
	running tests:
		TestKubernetesUpgrade (18m41s)
		TestNetworkPlugins (17m30s)
		TestStartStop (30m37s)
		TestStartStop/group/no-preload (7m44s)
		TestStartStop/group/no-preload/serial (7m44s)
		TestStartStop/group/no-preload/serial/FirstStart (7m44s)
		TestStartStop/group/old-k8s-version (10m11s)
		TestStartStop/group/old-k8s-version/serial (10m11s)
		TestStartStop/group/old-k8s-version/serial/SecondStart (11s)
		TestStoppedBinaryUpgrade (15m4s)
		TestStoppedBinaryUpgrade/Upgrade (15m3s)

                                                
                                                
goroutine 2481 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc00040f880, 0xc0007f1bc8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
testing.runTests(0xc000780000, {0x5f81f20, 0x2b, 0x2b}, {0xffffffffffffffff?, 0xc000ad0000?, 0x5faa060?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000512960)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000512960)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 141 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x4433770, 0xc00040c070}, 0xc000735f50, 0xc000735f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x4433770, 0xc00040c070}, 0xc0?, 0xc000735f50, 0xc000735f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x4433770?, 0xc00040c070?}, 0x329cec0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd9c025?, 0xc000646300?, 0xc0006c81c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 153
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 153 [chan receive, 171 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0004a1320, 0xc00040c070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 151
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 2168 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc00150ca80, 0x40890c8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 2137
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 142 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 141
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2239 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197efc0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197efc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00197efc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00197efc0, 0xc000093780)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 152 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x4445700, {{0x443abe8, 0xc0000d5e80?}, 0xc0015000f0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 151
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 140 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001506750, 0x3b)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc00142fce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x4448780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004a1320)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc2a27c?, 0x5ff88a0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x4433770?, 0xc00040c070?}, 0xc19de5?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x4433770, 0xc00040c070}, 0xc00142ff50, {0x43f2cc0, 0xc0015000c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x656b636f6422202b?, {0x43f2cc0?, 0xc0015000c0?}, 0x74?, 0x756b206e6f20646e?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000794020, 0x3b9aca00, 0x0, 0x1, 0xc00040c070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 153
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 888 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x4433770, 0xc00040c070}, 0xc000731f50, 0xc000731f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x4433770, 0xc00040c070}, 0xa0?, 0xc000731f50, 0xc000731f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x4433770?, 0xc00040c070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000731fd0?, 0xd9c084?, 0xc00040d5e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 864
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2169 [chan receive, 10 minutes]:
testing.(*T).Run(0xc00150cc40, {0x36a0d97?, 0x0?}, 0xc0006a8200)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00150cc40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc00150cc40, 0xc001a64140)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2168
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2241 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197f340)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197f340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00197f340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00197f340, 0xc000093880)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2477 [syscall]:
syscall.Syscall(0xc000b61be0?, 0x0?, 0xd5f83b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x760, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc001462300?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001462300)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc001462300)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013f8fc0, 0xc001462300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x44333e0, 0xc00068a1c0}, 0xc0013f8fc0, {0xc00148e000, 0x16}, {0x7ffdbac95f50?, 0xc000b61f60?}, {0xd62053?, 0x2cb704e2359?}, {0xc000164200, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0xc8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0013f8fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc0013f8fc0, 0xc0006ca200)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2377
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2366 [syscall]:
syscall.Syscall6(0x2f67c2429a8?, 0x2f636900a38?, 0x4000?, 0xc0014cc008?, 0xc000b22000?, 0xc001543bf0?, 0xc77f79?, 0xc001543c18?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x698, {0xc000b25b49?, 0x4b7, 0xcce17f?}, 0x4000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00154b688?, {0xc000b25b49?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00154b688, {0xc000b25b49, 0x4b7, 0x4b7})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00039c098, {0xc000b25b49?, 0x112c?, 0x112c?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001750300, {0x43f11e0, 0xc00068c0c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc001750300}, {0x43f11e0, 0xc00068c0c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001543e90?, {0x43f1360, 0xc001750300})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001543f38?, {0x43f1360?, 0xc001750300?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc001750300}, {0x43f12c0, 0xc00039c098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0014645b0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2355
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2235 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150ddc0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150ddc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150ddc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150ddc0, 0xc000093580)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2355 [syscall, 6 minutes]:
syscall.Syscall(0xc0007bdaa8?, 0x0?, 0xd5f83b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x564, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc00146a480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00146a480)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc00146a480)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013f8700, 0xc00146a480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc0013f8700)
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:198 +0x6f5
testing.tRunner(0xc0013f8700, 0xc001506c40)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2480 [select]:
os/exec.(*Cmd).watchCtx(0xc001462300, 0xc0014653b0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2477
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2413 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x4445700, {{0x443abe8, 0xc0000d5e80?}, 0xc000c29680?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2385
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 2377 [chan receive]:
testing.(*T).Run(0xc001abafc0, {0x36ac24d?, 0xc1939e?}, 0xc0006ca200)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001abafc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc001abafc0, 0xc0006a8200)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2169
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 863 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x4445700, {{0x443abe8, 0xc0000d5e80?}, 0xc001d40b60?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 856
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 2171 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150cfc0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150cfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00150cfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00150cfc0, 0xc001a64240)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2168
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2236 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00037ce00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00037ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00037ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00037ce00, 0xc000093600)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2365 [syscall, 6 minutes]:
syscall.Syscall6(0x2f67c240428?, 0x2f636900a38?, 0x800?, 0xc0014cd008?, 0xc00158c800?, 0xc001577bf0?, 0xc77f79?, 0xc001736000?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x3d4, {0xc00158ca6f?, 0x591, 0xcce17f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00154afc8?, {0xc00158ca6f?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00154afc8, {0xc00158ca6f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00039c080, {0xc00158ca6f?, 0xc16dff?, 0x31570c0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0017502d0, {0x43f11e0, 0xc000b00030})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc0017502d0}, {0x43f11e0, 0xc000b00030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x43f1360, 0xc0017502d0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001577eb0?, {0x43f1360?, 0xc0017502d0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc0017502d0}, {0x43f12c0, 0xc00039c080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0014643f0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2355
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2237 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197ec40)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197ec40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00197ec40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00197ec40, 0xc000093680)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 864 [chan receive, 148 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001e84600, 0xc00040c070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 856
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 762 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x2f67c28d8b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xccce13?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0006482a0, 0xc001419ba0)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc000648288, 0x3f4, {0xc00037ea50?, 0xc001419c00?, 0xcd7545?}, 0xc001419c34?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc000648288, 0xc001419d80)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc000648288)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc000ad6980)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc000ad6980)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc00049f500, {0x4420e50, 0xc000ad6980})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc00049f500)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 759
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

goroutine 2367 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc00146a480, 0xc0006d0620)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2355
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 887 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001581410, 0x35)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc00140bce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x4448780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001e84600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc001447ea8?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x4433770?, 0xc00040c070?}, 0xc19d34?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x4433770, 0xc00040c070}, 0xc00140bf50, {0x43f2cc0, 0xc001500300}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x43f2cc0?, 0xc001500300?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000894b30, 0x3b9aca00, 0x0, 0x1, 0xc00040c070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 864
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 2172 [chan receive, 8 minutes]:
testing.(*T).Run(0xc00150d180, {0x36a0d97?, 0x0?}, 0xc000896200)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00150d180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc00150d180, 0xc001a64280)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2168
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2137 [chan receive, 31 minutes]:
testing.(*T).Run(0xc000586a80, {0x369f9c9?, 0xd62053?}, 0x40890c8)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop(0xc000586a80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000586a80, 0x4088ee8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2173 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150d340)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150d340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00150d340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00150d340, 0xc001a642c0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2168
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2141 [syscall, 4 minutes]:
syscall.Syscall(0xc0007c1988?, 0x0?, 0xd5f83b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x75c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc00146a600?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00146a600)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc00146a600)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc00197ea80, 0xc00146a600)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00197ea80)
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:275 +0x1425
testing.tRunner(0xc00197ea80, 0x4088e68)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2140 [chan receive, 16 minutes]:
testing.(*T).Run(0xc000587dc0, {0x36a36d6?, 0x3005753e800?}, 0xc001506c40)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc000587dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:160 +0x2ab
testing.tRunner(0xc000587dc0, 0x4088ef0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2234 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150dc00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150dc00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150dc00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150dc00, 0xc000093500)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 889 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 888
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 2242 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197f500)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197f500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00197f500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00197f500, 0xc000093900)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1131 [chan send, 143 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a37080, 0xc001a62b60)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 870
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2174 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150d500)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150d500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00150d500)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00150d500, 0xc001a64340)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2168
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2437 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000888690, 0x0)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc0013afce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x4448780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001ad5860)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc2a27c?, 0x5ff88a0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x4433770?, 0xc00040c070?}, 0xc19de5?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x4433770, 0xc00040c070}, 0xc0013aff50, {0x43f2cc0, 0xc00081f020}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x43f2cc0?, 0xc00081f020?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017f8290, 0x3b9aca00, 0x0, 0x1, 0xc00040c070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2414
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 2233 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc00150d880, 0xc000428f90)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 2067
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2238 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197ee00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197ee00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00197ee00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00197ee00, 0xc000093700)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2240 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197f180)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197f180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00197f180)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00197f180, 0xc000093800)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2170 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0xc00091a140)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150ce00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00150ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00150ce00, 0xc001a64180)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2168
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2384 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00146a600, 0xc0006d0f50)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2141
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2067 [chan receive, 19 minutes]:
testing.(*T).Run(0xc00150c380, {0x369f9c9?, 0xc0015eff60?}, 0xc000428f90)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00150c380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc00150c380, 0x4088ea0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2478 [syscall]:
syscall.Syscall6(0x2f67bdc8ff8?, 0x2f636900ed0?, 0x800?, 0xc0000d8008?, 0xc001755000?, 0xc0013adbf0?, 0xc77f79?, 0xc2a27c?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x734, {0xc00175526f?, 0x591, 0xcce17f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc000c1fd48?, {0xc00175526f?, 0x0?, 0xc00217c300?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc000c1fd48, {0xc00175526f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6ae0, {0xc00175526f?, 0xc16dff?, 0x31570c0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0018909f0, {0x43f11e0, 0xc000846040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc0018909f0}, {0x43f11e0, 0xc000846040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00147de18?, {0x43f1360, 0xc0018909f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0013adf38?, {0x43f1360?, 0xc0018909f0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc0018909f0}, {0x43f12c0, 0xc0000c6ae0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc001464770?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2477
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2439 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2438
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 2382 [syscall, 4 minutes]:
syscall.Syscall6(0x2f67bdc8ff8?, 0x2f636900ed0?, 0x800?, 0xc000800808?, 0xc001754000?, 0xc001a91bf0?, 0xc77f79?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x6d8, {0xc00175420e?, 0x5f2, 0xcce17f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00154ad88?, {0xc00175420e?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00154ad88, {0xc00175420e, 0x5f2, 0x5f2})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00039c060, {0xc00175420e?, 0xc16dff?, 0x31570c0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001751ef0, {0x43f11e0, 0xc00068c120})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc001751ef0}, {0x43f11e0, 0xc00068c120}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x43f1360, 0xc001751ef0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001a91eb0?, {0x43f1360?, 0xc001751ef0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc001751ef0}, {0x43f12c0, 0xc00039c060}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2141
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2479 [syscall]:
syscall.Syscall6(0x2f67c1897b8?, 0x2f636900ed0?, 0x4000?, 0xc00085b008?, 0xc000b48000?, 0xc000aefbf0?, 0xc77f79?, 0xc00052ac60?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x640, {0xc000b4a392?, 0x1c6e, 0xcce17f?}, 0x4000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00146cfc8?, {0xc000b4a392?, 0x0?, 0xc000657320?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00146cfc8, {0xc000b4a392, 0x1c6e, 0x1c6e})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6af8, {0xc000b4a392?, 0xc16dff?, 0x31570c0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001890a20, {0x43f11e0, 0xc000912028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc001890a20}, {0x43f11e0, 0xc000912028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x43f1360, 0xc001890a20})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xd1e16e?, {0x43f1360?, 0xc001890a20?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc001890a20}, {0x43f12c0, 0xc0000c6af8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000aeffa8?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2477
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2383 [syscall, 4 minutes]:
syscall.Syscall6(0x2f67c200be8?, 0x2f636900ed0?, 0x4000?, 0x5fac740?, 0xc000b70000?, 0xc000aedbf0?, 0xc77f79?, 0xc001480008?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x6e4, {0xc000b720a1?, 0x1f5f, 0xcce17f?}, 0x4000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00154b448?, {0xc000b720a1?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00154b448, {0xc000b720a1, 0x1f5f, 0x1f5f})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00039c078, {0xc000b720a1?, 0x3b7?, 0x3b7?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001751f20, {0x43f11e0, 0xc000b00018})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc001751f20}, {0x43f11e0, 0xc000b00018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x43f1360, 0xc001751f20})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000aedeb0?, {0x43f1360?, 0xc001751f20?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc001751f20}, {0x43f12c0, 0xc00039c078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00155a008?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2141
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2438 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x4433770, 0xc00040c070}, 0xc0016adf50, 0xc0016adf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x4433770, 0xc00040c070}, 0x70?, 0xc0016adf50, 0xc0016adf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x4433770?, 0xc00040c070?}, 0x3049090a6472656e?, 0x313a333120383039?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd9c025?, 0xc001462900?, 0xc00040dc70?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2414
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 2409 [syscall, 8 minutes]:
syscall.Syscall(0xc0016afc10?, 0x0?, 0xd5f83b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x594, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000646300?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000646300)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000646300)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0016e4c40, 0xc000646300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x44333e0?, 0xc0004268c0?}, 0xc0016e4c40, {0xc001ba6a20?, 0xc7a291?}, {0x7ffdbac95f50?, 0xc0016aff60?}, {0xd62053?, 0x2ca62754bd8?}, {0xc00049eb00, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:184 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0016e4c40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc0016e4c40, 0xc000896280)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2408
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2408 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0016e4a80, {0x36aa22c?, 0xc84380?}, 0xc000896280)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0016e4a80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc0016e4a80, 0xc000896200)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2172
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2410 [syscall, 8 minutes]:
syscall.Syscall6(0x2f67bdbffc8?, 0x2f6369005a0?, 0x400?, 0xc00085b808?, 0xc001776400?, 0xc000b53bf0?, 0xc77f79?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x678, {0xc0017765f8?, 0x208, 0xcce17f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00146c908?, {0xc0017765f8?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00146c908, {0xc0017765f8, 0x208, 0x208})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00039c0c0, {0xc0017765f8?, 0xc16dff?, 0x31570c0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001778d80, {0x43f11e0, 0xc00068c168})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc001778d80}, {0x43f11e0, 0xc00068c168}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x43f1360, 0xc001778d80})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x0?, {0x43f1360?, 0xc001778d80?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc001778d80}, {0x43f12c0, 0xc00039c0c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2409
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2411 [syscall]:
syscall.Syscall6(0x2f636900ed0?, 0x20000?, 0xc00085b008?, 0xc0013d6000?, 0xc00085b008?, 0xc000b55bf0?, 0xc77f85?, 0x6567616d695c6568?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x634, {0xc0013ee8f9?, 0x7707, 0xcce17f?}, 0x20000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00146cd88?, {0xc0013ee8f9?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00146cd88, {0xc0013ee8f9, 0x7707, 0x7707})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00039c0e0, {0xc0013ee8f9?, 0xd51?, 0xd51?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001778db0, {0x43f11e0, 0xc000b000a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x43f1360, 0xc001778db0}, {0x43f11e0, 0xc000b000a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x43f1360, 0xc001778db0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x0?, {0x43f1360?, 0xc001778db0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x43f1360, 0xc001778db0}, {0x43f12c0, 0xc00039c0e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2409
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2412 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000646300, 0xc00040d2d0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2409
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2414 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001ad5860, 0xc00040c070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2385
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614


Test pass (164/208)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 16.53
4 TestDownloadOnly/v1.28.0/preload-exists 0.07
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.29
9 TestDownloadOnly/v1.28.0/DeleteAll 0.96
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.65
12 TestDownloadOnly/v1.34.0/json-events 11.46
13 TestDownloadOnly/v1.34.0/preload-exists 0
16 TestDownloadOnly/v1.34.0/kubectl 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.28
18 TestDownloadOnly/v1.34.0/DeleteAll 0.89
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.69
21 TestBinaryMirror 6.97
22 TestOffline 411.2
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.33
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.33
27 TestAddons/Setup 491.82
29 TestAddons/serial/Volcano 65.58
31 TestAddons/serial/GCPAuth/Namespaces 0.34
32 TestAddons/serial/GCPAuth/FakeCredentials 11.56
35 TestAddons/parallel/Registry 35.58
36 TestAddons/parallel/RegistryCreds 15.84
37 TestAddons/parallel/Ingress 67.16
38 TestAddons/parallel/InspektorGadget 13.83
39 TestAddons/parallel/MetricsServer 21.23
41 TestAddons/parallel/CSI 91.95
42 TestAddons/parallel/Headlamp 43.01
43 TestAddons/parallel/CloudSpanner 22
44 TestAddons/parallel/LocalPath 84.89
45 TestAddons/parallel/NvidiaDevicePlugin 22.34
46 TestAddons/parallel/Yakd 26.68
48 TestAddons/StoppedEnableDisable 54.49
49 TestCertOptions 451.95
50 TestCertExpiration 882.71
51 TestDockerFlags 560.98
52 TestForceSystemdFlag 256.08
53 TestForceSystemdEnv 424.37
60 TestErrorSpam/start 16.9
61 TestErrorSpam/status 36.18
62 TestErrorSpam/pause 22.54
63 TestErrorSpam/unpause 22.7
64 TestErrorSpam/stop 61.26
67 TestFunctional/serial/CopySyncFile 0.04
68 TestFunctional/serial/StartWithProxy 220.28
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 142.16
71 TestFunctional/serial/KubeContext 0.13
72 TestFunctional/serial/KubectlGetPods 0.24
75 TestFunctional/serial/CacheCmd/cache/add_remote 33.06
76 TestFunctional/serial/CacheCmd/cache/add_local 12.99
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.28
78 TestFunctional/serial/CacheCmd/cache/list 0.29
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.56
80 TestFunctional/serial/CacheCmd/cache/cache_reload 38.44
81 TestFunctional/serial/CacheCmd/cache/delete 0.58
82 TestFunctional/serial/MinikubeKubectlCmd 0.52
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 3.1
84 TestFunctional/serial/ExtraConfig 133.49
85 TestFunctional/serial/ComponentHealth 0.18
86 TestFunctional/serial/LogsCmd 8.46
87 TestFunctional/serial/LogsFileCmd 10.52
88 TestFunctional/serial/InvalidService 20.76
90 TestFunctional/parallel/ConfigCmd 1.7
94 TestFunctional/parallel/StatusCmd 41.21
98 TestFunctional/parallel/ServiceCmdConnect 27.09
99 TestFunctional/parallel/AddonsCmd 0.67
100 TestFunctional/parallel/PersistentVolumeClaim 39.94
102 TestFunctional/parallel/SSHCmd 23.01
103 TestFunctional/parallel/CpCmd 60.06
104 TestFunctional/parallel/MySQL 58.81
105 TestFunctional/parallel/FileSync 10.2
106 TestFunctional/parallel/CertSync 61.56
110 TestFunctional/parallel/NodeLabels 0.19
112 TestFunctional/parallel/NonActiveRuntimeDisabled 10.3
114 TestFunctional/parallel/License 1.76
115 TestFunctional/parallel/ServiceCmd/DeployApp 10.45
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.39
118 TestFunctional/parallel/ServiceCmd/List 14.76
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.78
122 TestFunctional/parallel/ServiceCmd/JSONOutput 13.6
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
131 TestFunctional/parallel/ProfileCmd/profile_not_create 14.12
133 TestFunctional/parallel/ProfileCmd/profile_list 15.7
134 TestFunctional/parallel/ProfileCmd/profile_json_output 13.62
135 TestFunctional/parallel/DockerEnv/powershell 43.77
136 TestFunctional/parallel/UpdateContextCmd/no_changes 2.79
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.85
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.48
139 TestFunctional/parallel/Version/short 0.71
140 TestFunctional/parallel/Version/components 8.22
141 TestFunctional/parallel/ImageCommands/ImageListShort 8.01
142 TestFunctional/parallel/ImageCommands/ImageListTable 7.84
143 TestFunctional/parallel/ImageCommands/ImageListJson 7.8
144 TestFunctional/parallel/ImageCommands/ImageListYaml 8.07
145 TestFunctional/parallel/ImageCommands/ImageBuild 28.79
146 TestFunctional/parallel/ImageCommands/Setup 2.38
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.23
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 18.93
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 18.3
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.72
151 TestFunctional/parallel/ImageCommands/ImageRemove 14.66
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 14.53
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.57
154 TestFunctional/delete_echo-server_images 0.21
155 TestFunctional/delete_my-image_image 0.08
156 TestFunctional/delete_minikube_cached_images 0.1
161 TestMultiControlPlane/serial/StartCluster 725.9
162 TestMultiControlPlane/serial/DeployApp 12.66
164 TestMultiControlPlane/serial/AddWorkerNode 251.68
165 TestMultiControlPlane/serial/NodeLabels 0.29
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 48.21
167 TestMultiControlPlane/serial/CopyFile 631.29
171 TestImageBuild/serial/Setup 190.73
172 TestImageBuild/serial/NormalBuild 10.69
173 TestImageBuild/serial/BuildWithBuildArg 8.97
174 TestImageBuild/serial/BuildWithDockerIgnore 8.22
175 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.29
179 TestJSONOutput/start/Command 226.19
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 7.99
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 7.7
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 39.22
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.94
207 TestMainNoArgs 0.23
208 TestMinikubeProfile 522.69
211 TestMountStart/serial/StartWithMountFirst 155.51
212 TestMountStart/serial/VerifyMountFirst 9.49
213 TestMountStart/serial/StartWithMountSecond 155.87
214 TestMountStart/serial/VerifyMountSecond 9.44
215 TestMountStart/serial/DeleteFirst 30.78
216 TestMountStart/serial/VerifyMountPostDelete 9.33
217 TestMountStart/serial/Stop 28.29
218 TestMountStart/serial/RestartStopped 114.78
219 TestMountStart/serial/VerifyMountPostStop 9.31
222 TestMultiNode/serial/FreshStart2Nodes 432.77
223 TestMultiNode/serial/DeployApp2Nodes 9.82
225 TestMultiNode/serial/AddNode 238.61
226 TestMultiNode/serial/MultiNodeLabels 0.18
227 TestMultiNode/serial/ProfileList 35.46
228 TestMultiNode/serial/CopyFile 355.62
229 TestMultiNode/serial/StopNode 77
230 TestMultiNode/serial/StartAfterStop 189.34
235 TestPreload 484.32
236 TestScheduledStopWindows 323.88
241 TestRunningBinaryUpgrade 1042.82
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
256 TestPause/serial/Start 409.91
257 TestPause/serial/SecondStartNoReconfiguration 413.23
269 TestPause/serial/Pause 8.44
270 TestPause/serial/VerifyStatus 12.7
271 TestPause/serial/Unpause 8.38
272 TestPause/serial/PauseAgain 9.03
TestDownloadOnly/v1.28.0/json-events (16.53s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-357600 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-357600 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=hyperv: (16.5264177s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (16.53s)

TestDownloadOnly/v1.28.0/preload-exists (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 10:34:14.412215   11628 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0908 10:34:14.481490   11628 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.07s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-357600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-357600: exit status 85 (284.8123ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-357600 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=hyperv │ download-only-357600 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:33:58
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:33:57.986413    7208 out.go:360] Setting OutFile to fd 668 ...
	I0908 10:33:58.065912    7208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:33:58.065912    7208 out.go:374] Setting ErrFile to fd 672...
	I0908 10:33:58.065912    7208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W0908 10:33:58.081678    7208 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0908 10:33:58.090933    7208 out.go:368] Setting JSON to true
	I0908 10:33:58.095451    7208 start.go:130] hostinfo: {"hostname":"minikube6","uptime":296489,"bootTime":1757031148,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 10:33:58.095521    7208 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 10:33:58.102506    7208 out.go:99] [download-only-357600] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 10:33:58.102506    7208 notify.go:220] Checking for updates...
	W0908 10:33:58.102506    7208 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0908 10:33:58.104690    7208 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 10:33:58.108511    7208 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 10:33:58.111510    7208 out.go:171] MINIKUBE_LOCATION=21512
	I0908 10:33:58.114572    7208 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0908 10:33:58.120972    7208 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 10:33:58.121948    7208 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:34:03.415875    7208 out.go:99] Using the hyperv driver based on user configuration
	I0908 10:34:03.416132    7208 start.go:304] selected driver: hyperv
	I0908 10:34:03.416282    7208 start.go:918] validating driver "hyperv" against <nil>
	I0908 10:34:03.416780    7208 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:34:03.473921    7208 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=65534MB, container=0MB
	I0908 10:34:03.475210    7208 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 10:34:03.475210    7208 cni.go:84] Creating CNI manager for ""
	I0908 10:34:03.475883    7208 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 10:34:03.475961    7208 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 10:34:03.475961    7208 start.go:348] cluster config:
	{Name:download-only-357600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-357600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:34:03.476668    7208 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:34:03.479875    7208 out.go:99] Downloading VM boot image ...
	I0908 10:34:03.479875    7208 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 10:34:07.596204    7208 out.go:99] Starting "download-only-357600" primary control-plane node in "download-only-357600" cluster
	I0908 10:34:07.596204    7208 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 10:34:07.646304    7208 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0908 10:34:07.646304    7208 cache.go:58] Caching tarball of preloaded images
	I0908 10:34:07.647038    7208 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 10:34:07.650678    7208 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 10:34:07.650723    7208 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0908 10:34:07.725312    7208 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0908 10:34:11.310846    7208 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0908 10:34:11.312847    7208 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0908 10:34:12.328714    7208 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I0908 10:34:12.329099    7208 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-357600\config.json ...
	I0908 10:34:12.329788    7208 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-357600\config.json: {Name:mkaced7f70245acdae3006141cee249f848ad01d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:12.330991    7208 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 10:34:12.332989    7208 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.28.0/kubectl.exe
	
	
	* The control-plane node download-only-357600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-357600"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.29s)

x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.96s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.96s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.65s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-357600
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.65s)

TestDownloadOnly/v1.34.0/json-events (11.46s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-816200 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-816200 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=hyperv: (11.4626646s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (11.46s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 10:34:27.851625   11628 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0908 10:34:27.852342   11628 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
--- PASS: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-816200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-816200: exit status 85 (282.6411ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-357600 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=hyperv │ download-only-357600 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ delete  │ -p download-only-357600                                                                                                                           │ download-only-357600 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ start   │ -o=json --download-only -p download-only-816200 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=hyperv │ download-only-816200 │ minikube6\jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:34:16
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:34:16.497327    5212 out.go:360] Setting OutFile to fd 688 ...
	I0908 10:34:16.570511    5212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:34:16.570511    5212 out.go:374] Setting ErrFile to fd 684...
	I0908 10:34:16.570601    5212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:34:16.589544    5212 out.go:368] Setting JSON to true
	I0908 10:34:16.593891    5212 start.go:130] hostinfo: {"hostname":"minikube6","uptime":296508,"bootTime":1757031148,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 10:34:16.593891    5212 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 10:34:16.804111    5212 out.go:99] [download-only-816200] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 10:34:16.804753    5212 notify.go:220] Checking for updates...
	I0908 10:34:16.808159    5212 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 10:34:16.811764    5212 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 10:34:16.814816    5212 out.go:171] MINIKUBE_LOCATION=21512
	I0908 10:34:16.818031    5212 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0908 10:34:16.824632    5212 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 10:34:16.825902    5212 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:34:22.143280    5212 out.go:99] Using the hyperv driver based on user configuration
	I0908 10:34:22.143280    5212 start.go:304] selected driver: hyperv
	I0908 10:34:22.143498    5212 start.go:918] validating driver "hyperv" against <nil>
	I0908 10:34:22.144013    5212 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:34:22.195546    5212 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=65534MB, container=0MB
	I0908 10:34:22.195922    5212 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 10:34:22.195922    5212 cni.go:84] Creating CNI manager for ""
	I0908 10:34:22.196957    5212 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 10:34:22.197058    5212 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 10:34:22.197294    5212 start.go:348] cluster config:
	{Name:download-only-816200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-816200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:34:22.197294    5212 iso.go:125] acquiring lock: {Name:mk0c8af595f03ef7f7ea249099688f084dfd77f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:34:22.200947    5212 out.go:99] Starting "download-only-816200" primary control-plane node in "download-only-816200" cluster
	I0908 10:34:22.200947    5212 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 10:34:22.264489    5212 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 10:34:22.264612    5212 cache.go:58] Caching tarball of preloaded images
	I0908 10:34:22.264992    5212 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 10:34:22.269038    5212 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 10:34:22.269139    5212 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0908 10:34:22.346101    5212 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4?checksum=md5:994a4de1464928e89c992dfd0a962e35 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0908 10:34:25.545351    5212 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0908 10:34:25.546271    5212 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0908 10:34:26.401143    5212 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 10:34:26.402175    5212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-816200\config.json ...
	I0908 10:34:26.402175    5212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-816200\config.json: {Name:mkeec45d08bf4c9ab07ac9fb351169cad83aad97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:26.404382    5212 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 10:34:26.405546    5212 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.34.0/kubectl.exe
	
	
	* The control-plane node download-only-816200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-816200"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.28s)

TestDownloadOnly/v1.34.0/DeleteAll (0.89s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.89s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.69s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-816200
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.69s)

TestBinaryMirror (6.97s)

=== RUN   TestBinaryMirror
I0908 10:34:31.196406   11628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-269900 --alsologtostderr --binary-mirror http://127.0.0.1:49610 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-269900 --alsologtostderr --binary-mirror http://127.0.0.1:49610 --driver=hyperv: (6.2700092s)
helpers_test.go:175: Cleaning up "binary-mirror-269900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-269900
--- PASS: TestBinaryMirror (6.97s)

TestOffline (411.2s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-208500 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-208500 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=hyperv: (6m10.4787635s)
helpers_test.go:175: Cleaning up "offline-docker-208500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-208500
E0908 13:05:53.524761   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-208500: (40.7176624s)
--- PASS: TestOffline (411.20s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.33s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-020700
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-020700: exit status 85 (333.8113ms)

-- stdout --
	* Profile "addons-020700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-020700"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.33s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.33s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-020700
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-020700: exit status 85 (331.286ms)

-- stdout --
	* Profile "addons-020700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-020700"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.33s)

TestAddons/Setup (491.82s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-020700 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-020700 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (8m11.816013s)
--- PASS: TestAddons/Setup (491.82s)

TestAddons/serial/Volcano (65.58s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 21.1407ms
addons_test.go:876: volcano-admission stabilized in 21.1407ms
addons_test.go:884: volcano-controller stabilized in 21.1407ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-7crnm" [cacfb1d0-9003-479e-a6dd-e1e054dd7a29] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0058571s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-pcqpc" [6967afb1-39f2-41a9-9d1c-30351d01093d] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0069648s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-nkbdl" [0374198d-d115-4239-97e1-b4a665a0e560] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0078484s
addons_test.go:903: (dbg) Run:  kubectl --context addons-020700 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-020700 create -f testdata\vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-020700 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [2a37eca6-0015-4ed4-a9b6-bed3150f29dc] Pending
helpers_test.go:352: "test-job-nginx-0" [2a37eca6-0015-4ed4-a9b6-bed3150f29dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [2a37eca6-0015-4ed4-a9b6-bed3150f29dc] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 23.0065548s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable volcano --alsologtostderr -v=1: (25.6041956s)
--- PASS: TestAddons/serial/Volcano (65.58s)

TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-020700 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-020700 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/serial/GCPAuth/FakeCredentials (11.56s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-020700 create -f testdata\busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-020700 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [84fe736e-36bb-4d5a-ba24-9438fb7d157f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [84fe736e-36bb-4d5a-ba24-9438fb7d157f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0069618s
addons_test.go:694: (dbg) Run:  kubectl --context addons-020700 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-020700 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-020700 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-020700 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.56s)

TestAddons/parallel/Registry (35.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.4541ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-c2gt5" [b2ef5c36-d19d-447a-b50b-d894caa69ab6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005369s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-f5bmq" [41c92c49-671c-47d6-8521-76a54ec90685] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0059862s
addons_test.go:392: (dbg) Run:  kubectl --context addons-020700 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-020700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-020700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.3188261s)
addons_test.go:411: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 ip
addons_test.go:411: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 ip: (2.8042759s)
2025/09/08 10:44:56 [DEBUG] GET http://172.20.63.132:5000
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable registry --alsologtostderr -v=1: (16.1902451s)
--- PASS: TestAddons/parallel/Registry (35.58s)

TestAddons/parallel/RegistryCreds (15.84s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 8.8726ms
addons_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-020700
addons_test.go:332: (dbg) Run:  kubectl --context addons-020700 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable registry-creds --alsologtostderr -v=1: (15.2249804s)
--- PASS: TestAddons/parallel/RegistryCreds (15.84s)

TestAddons/parallel/Ingress (67.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-020700 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-020700 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-020700 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ecfa05ae-6420-4662-a30c-2d5cbfc00fc6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ecfa05ae-6420-4662-a30c-2d5cbfc00fc6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0082743s
I0908 10:45:35.733939   11628 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.9184138s)
addons_test.go:288: (dbg) Run:  kubectl --context addons-020700 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 ip: (2.5518892s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.20.63.132
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable ingress-dns --alsologtostderr -v=1: (16.5382952s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable ingress --alsologtostderr -v=1: (22.8643553s)
--- PASS: TestAddons/parallel/Ingress (67.16s)

TestAddons/parallel/InspektorGadget (13.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-x2qpv" [0aeb96ba-f4ef-48b8-89b7-d20518a42cec] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0085329s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable inspektor-gadget --alsologtostderr -v=1: (7.8130811s)
--- PASS: TestAddons/parallel/InspektorGadget (13.83s)

TestAddons/parallel/MetricsServer (21.23s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 16.3609ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7k6qx" [7bad1e09-7737-455d-bde9-b18bd8aecd5c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0094852s
addons_test.go:463: (dbg) Run:  kubectl --context addons-020700 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable metrics-server --alsologtostderr -v=1: (15.9785874s)
--- PASS: TestAddons/parallel/MetricsServer (21.23s)

TestAddons/parallel/CSI (91.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 10:45:20.299658   11628 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 10:45:20.313081   11628 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 10:45:20.313081   11628 kapi.go:107] duration metric: took 13.5281ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 13.5281ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-020700 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-020700 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [746c6d64-33b4-4e6a-91c6-667d95f4c7e3] Pending
helpers_test.go:352: "task-pv-pod" [746c6d64-33b4-4e6a-91c6-667d95f4c7e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [746c6d64-33b4-4e6a-91c6-667d95f4c7e3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0091599s
addons_test.go:572: (dbg) Run:  kubectl --context addons-020700 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-020700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-020700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-020700 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-020700 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-020700 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-020700 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [39027373-7f94-4a88-aeeb-f16864d4045d] Pending
helpers_test.go:352: "task-pv-pod-restore" [39027373-7f94-4a88-aeeb-f16864d4045d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [39027373-7f94-4a88-aeeb-f16864d4045d] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0088562s
addons_test.go:614: (dbg) Run:  kubectl --context addons-020700 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-020700 delete pod task-pv-pod-restore: (1.4020757s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-020700 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-020700 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable volumesnapshots --alsologtostderr -v=1: (15.6215008s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable csi-hostpath-driver --alsologtostderr -v=1: (20.7393459s)
--- PASS: TestAddons/parallel/CSI (91.95s)

TestAddons/parallel/Headlamp (43.01s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-020700 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-020700 --alsologtostderr -v=1: (15.8900681s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-28qxk" [298ad0bb-a816-4303-a25e-0caafe98d4a2] Pending
helpers_test.go:352: "headlamp-6f46646d79-28qxk" [298ad0bb-a816-4303-a25e-0caafe98d4a2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-28qxk" [298ad0bb-a816-4303-a25e-0caafe98d4a2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0234662s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable headlamp --alsologtostderr -v=1: (8.0918695s)
--- PASS: TestAddons/parallel/Headlamp (43.01s)

TestAddons/parallel/CloudSpanner (22.00s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-lvmbh" [10aba3e0-9970-4ae4-815d-757b1d74c4ca] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0090918s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable cloud-spanner --alsologtostderr -v=1: (15.965142s)
--- PASS: TestAddons/parallel/CloudSpanner (22.00s)

TestAddons/parallel/LocalPath (84.89s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-020700 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-020700 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a16a799d-24d5-45ad-bffe-00eef9b1db9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a16a799d-24d5-45ad-bffe-00eef9b1db9b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a16a799d-24d5-45ad-bffe-00eef9b1db9b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0070811s
addons_test.go:967: (dbg) Run:  kubectl --context addons-020700 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 ssh "cat /opt/local-path-provisioner/pvc-5e60a254-1ed6-4730-a282-c062a8245062_default_test-pvc/file1"
addons_test.go:976: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 ssh "cat /opt/local-path-provisioner/pvc-5e60a254-1ed6-4730-a282-c062a8245062_default_test-pvc/file1": (10.5613278s)
addons_test.go:988: (dbg) Run:  kubectl --context addons-020700 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-020700 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m0.5883617s)
--- PASS: TestAddons/parallel/LocalPath (84.89s)

TestAddons/parallel/NvidiaDevicePlugin (22.34s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qqppv" [f707e9da-4b58-43f6-a612-244eb34d0d7c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0070454s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable nvidia-device-plugin --alsologtostderr -v=1: (16.3325498s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.34s)

TestAddons/parallel/Yakd (26.68s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-74826" [b54a7420-4fde-4695-a40f-c9eb14b1f267] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006986s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-020700 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-020700 addons disable yakd --alsologtostderr -v=1: (20.6725928s)
--- PASS: TestAddons/parallel/Yakd (26.68s)

TestAddons/StoppedEnableDisable (54.49s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-020700
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-020700: (42.297453s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-020700
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-020700: (4.8876994s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-020700
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-020700: (4.657598s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-020700
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-020700: (2.6474658s)
--- PASS: TestAddons/StoppedEnableDisable (54.49s)

TestCertOptions (451.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-167500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-167500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m30.242987s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-167500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0908 13:25:15.357742   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-167500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.1969374s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-167500 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-167500 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-167500 -- "sudo cat /etc/kubernetes/admin.conf": (9.9706228s)
helpers_test.go:175: Cleaning up "cert-options-167500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-167500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-167500: (41.3886186s)
--- PASS: TestCertOptions (451.95s)

TestCertExpiration (882.71s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-367100 --memory=3072 --cert-expiration=3m --driver=hyperv
E0908 13:05:15.342773   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-367100 --memory=3072 --cert-expiration=3m --driver=hyperv: (7m50.4021812s)
E0908 13:12:50.434222   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-367100 --memory=3072 --cert-expiration=8760h --driver=hyperv
E0908 13:15:15.349781   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-367100 --memory=3072 --cert-expiration=8760h --driver=hyperv: (3m4.5450813s)
helpers_test.go:175: Cleaning up "cert-expiration-367100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-367100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-367100: (47.7574199s)
--- PASS: TestCertExpiration (882.71s)

TestDockerFlags (560.98s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-985700 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-985700 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (8m17.7819324s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-985700 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-985700 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.8392433s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-985700 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-985700 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.5634715s)
helpers_test.go:175: Cleaning up "docker-flags-985700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-985700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-985700: (42.7900351s)
--- PASS: TestDockerFlags (560.98s)

TestForceSystemdFlag (256.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-208500 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-208500 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m18.5352948s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-208500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-208500 ssh "docker info --format {{.CgroupDriver}}": (10.0025232s)
helpers_test.go:175: Cleaning up "force-systemd-flag-208500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-208500
E0908 13:02:50.425869   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-208500: (47.5435s)
--- PASS: TestForceSystemdFlag (256.08s)

TestForceSystemdEnv (424.37s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-844300 --memory=3072 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-844300 --memory=3072 --alsologtostderr -v=5 --driver=hyperv: (6m7.2941652s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-844300 ssh "docker info --format {{.CgroupDriver}}"
E0908 13:22:50.441426   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-844300 ssh "docker info --format {{.CgroupDriver}}": (10.0561487s)
helpers_test.go:175: Cleaning up "force-systemd-env-844300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-844300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-844300: (47.0178256s)
--- PASS: TestForceSystemdEnv (424.37s)

TestErrorSpam/start (16.9s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 start --dry-run
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 start --dry-run: (5.5703333s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 start --dry-run
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 start --dry-run: (5.6426091s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 start --dry-run
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 start --dry-run: (5.6768243s)
--- PASS: TestErrorSpam/start (16.90s)

TestErrorSpam/status (36.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 status
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 status: (12.3038605s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 status
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 status: (11.8408553s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 status
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 status: (12.0335001s)
--- PASS: TestErrorSpam/status (36.18s)

TestErrorSpam/pause (22.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 pause: (7.8090783s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 pause
E0908 10:52:50.328920   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:50.336861   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:50.348924   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:50.371519   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:50.413734   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:50.496039   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:50.658913   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:50.980919   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:51.623413   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 pause: (7.4038021s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 pause
E0908 10:52:52.905925   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:55.468594   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 pause: (7.3190572s)
--- PASS: TestErrorSpam/pause (22.54s)

TestErrorSpam/unpause (22.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 unpause
E0908 10:53:00.591695   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 unpause: (7.6184189s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 unpause
E0908 10:53:10.834238   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 unpause: (7.6298773s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 unpause
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 unpause: (7.4520345s)
--- PASS: TestErrorSpam/unpause (22.70s)

TestErrorSpam/stop (61.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 stop
E0908 10:53:31.317456   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 stop: (39.9199768s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 stop
E0908 10:54:12.279994   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 stop: (10.8891878s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-404300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-404300 stop: (10.4498681s)
--- PASS: TestErrorSpam/stop (61.26s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\11628\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (220.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264100 --memory=4096 --apiserver-port=8441 --wait=all --driver=hyperv
E0908 10:55:34.203972   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:57:50.333710   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:18.048923   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-264100 --memory=4096 --apiserver-port=8441 --wait=all --driver=hyperv: (3m40.2676655s)
--- PASS: TestFunctional/serial/StartWithProxy (220.28s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (142.16s)

=== RUN   TestFunctional/serial/SoftStart
I0908 10:58:20.034546   11628 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264100 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-264100 --alsologtostderr -v=8: (2m22.1600026s)
functional_test.go:678: soft start took 2m22.1614344s for "functional-264100" cluster.
I0908 11:00:42.197477   11628 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (142.16s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.24s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-264100 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.24s)

TestFunctional/serial/CacheCmd/cache/add_remote (33.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cache add registry.k8s.io/pause:3.1: (10.9605219s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cache add registry.k8s.io/pause:3.3: (10.826273s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cache add registry.k8s.io/pause:latest: (11.2746136s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (33.06s)

TestFunctional/serial/CacheCmd/cache/add_local (12.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-264100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2508135756\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-264100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2508135756\001: (1.9872465s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cache add minikube-local-cache-test:functional-264100
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cache add minikube-local-cache-test:functional-264100: (10.587923s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cache delete minikube-local-cache-test:functional-264100
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-264100
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (12.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

TestFunctional/serial/CacheCmd/cache/list (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.29s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh sudo crictl images
functional_test.go:1139: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh sudo crictl images: (9.5602824s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.56s)

TestFunctional/serial/CacheCmd/cache/cache_reload (38.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1162: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.3274397s)
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.3332106s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cache reload: (10.4982794s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1178: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.2804481s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (38.44s)

TestFunctional/serial/CacheCmd/cache/delete (0.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.58s)

TestFunctional/serial/MinikubeKubectlCmd (0.52s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 kubectl -- --context functional-264100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (3.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-264100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (3.10s)

TestFunctional/serial/ExtraConfig (133.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 11:02:50.335452   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-264100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m13.4863386s)
functional_test.go:776: restart took 2m13.4868007s for "functional-264100" cluster.
I0908 11:04:34.880962   11628 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (133.49s)

TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-264100 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (8.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 logs: (8.4638072s)
--- PASS: TestFunctional/serial/LogsCmd (8.46s)

TestFunctional/serial/LogsFileCmd (10.52s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2574655545\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2574655545\001\logs.txt: (10.5039468s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.52s)

TestFunctional/serial/InvalidService (20.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-264100 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-264100
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-264100: exit status 115 (16.3654769s)
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://172.20.61.180:31969 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-264100 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (20.76s)

TestFunctional/parallel/ConfigCmd (1.7s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 config get cpus: exit status 14 (254.826ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 config get cpus: exit status 14 (228.3065ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.70s)

TestFunctional/parallel/StatusCmd (41.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 status
functional_test.go:869: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 status: (14.1024253s)
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.2207828s)
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 status -o json
functional_test.go:887: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 status -o json: (12.8873553s)
--- PASS: TestFunctional/parallel/StatusCmd (41.21s)

TestFunctional/parallel/ServiceCmdConnect (27.09s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-264100 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-264100 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8dgm7" [3ea9c6a1-3c23-4b6a-be52-d1e545f7ef96] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-8dgm7" [3ea9c6a1-3c23-4b6a-be52-d1e545f7ef96] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0057776s
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 service hello-node-connect --url
functional_test.go:1654: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 service hello-node-connect --url: (18.6476284s)
functional_test.go:1660: found endpoint for hello-node-connect: http://172.20.61.180:31298
functional_test.go:1680: http://172.20.61.180:31298: success! body:
Request served by hello-node-connect-7d85dfc575-8dgm7

HTTP/1.1 GET /

Host: 172.20.61.180:31298
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.09s)

TestFunctional/parallel/AddonsCmd (0.67s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.67s)

TestFunctional/parallel/PersistentVolumeClaim (39.94s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [dd0d3c21-c5e0-4c11-bfef-c2862acba1c0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0082533s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-264100 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-264100 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-264100 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-264100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [78b8a87b-166b-42c2-8d6f-47837335e08e] Pending
helpers_test.go:352: "sp-pod" [78b8a87b-166b-42c2-8d6f-47837335e08e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [78b8a87b-166b-42c2-8d6f-47837335e08e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0068241s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-264100 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-264100 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-264100 delete -f testdata/storage-provisioner/pod.yaml: (1.6980919s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-264100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fc1b864a-6e18-4ee3-b781-b64d3cae4f9e] Pending
helpers_test.go:352: "sp-pod" [fc1b864a-6e18-4ee3-b781-b64d3cae4f9e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [fc1b864a-6e18-4ee3-b781-b64d3cae4f9e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0059085s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-264100 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.94s)

TestFunctional/parallel/SSHCmd (23.01s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "echo hello"
functional_test.go:1730: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "echo hello": (11.6153938s)
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "cat /etc/hostname"
functional_test.go:1747: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "cat /etc/hostname": (11.3912931s)
--- PASS: TestFunctional/parallel/SSHCmd (23.01s)

TestFunctional/parallel/CpCmd (60.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.0795998s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh -n functional-264100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh -n functional-264100 "sudo cat /home/docker/cp-test.txt": (11.6990224s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cp functional-264100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1156762669\001\cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cp functional-264100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1156762669\001\cp-test.txt: (10.6055345s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh -n functional-264100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh -n functional-264100 "sudo cat /home/docker/cp-test.txt": (10.1740674s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.9381978s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh -n functional-264100 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh -n functional-264100 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.5555171s)
--- PASS: TestFunctional/parallel/CpCmd (60.06s)

TestFunctional/parallel/MySQL (58.81s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-264100 replace --force -f testdata\mysql.yaml
E0908 11:07:50.340109   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-hnlbk" [ac21e6a9-fc04-402a-8ada-56cfa52af093] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-hnlbk" [ac21e6a9-fc04-402a-8ada-56cfa52af093] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 47.0168338s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;": exit status 1 (264.2712ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0908 11:08:38.059308   11628 retry.go:31] will retry after 609.866871ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;": exit status 1 (321.1186ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0908 11:08:39.003148   11628 retry.go:31] will retry after 2.033813716s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;": exit status 1 (305.5178ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0908 11:08:41.354131   11628 retry.go:31] will retry after 2.441555149s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;": exit status 1 (311.9779ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0908 11:08:44.122342   11628 retry.go:31] will retry after 4.65767304s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-264100 exec mysql-5bb876957f-hnlbk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (58.81s)

TestFunctional/parallel/FileSync (10.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11628/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/test/nested/copy/11628/hosts"
functional_test.go:1936: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/test/nested/copy/11628/hosts": (10.1992079s)
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.20s)

TestFunctional/parallel/CertSync (61.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11628.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/11628.pem"
functional_test.go:1978: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/11628.pem": (10.2389381s)
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11628.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /usr/share/ca-certificates/11628.pem"
functional_test.go:1978: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /usr/share/ca-certificates/11628.pem": (10.1128752s)
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1978: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.4155545s)
functional_test.go:2004: Checking for existence of /etc/ssl/certs/116282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/116282.pem"
functional_test.go:2005: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/116282.pem": (10.4578698s)
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/116282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /usr/share/ca-certificates/116282.pem"
functional_test.go:2005: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /usr/share/ca-certificates/116282.pem": (10.0546462s)
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2005: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.2772383s)
--- PASS: TestFunctional/parallel/CertSync (61.56s)

TestFunctional/parallel/NodeLabels (0.19s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-264100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.3s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 ssh "sudo systemctl is-active crio": exit status 1 (10.2989549s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.30s)

TestFunctional/parallel/License (1.76s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.739817s)
--- PASS: TestFunctional/parallel/License (1.76s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-264100 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-264100 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ljt2n" [81e1612f-e68e-48ee-9bdb-00c9bed7dfaf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-ljt2n" [81e1612f-e68e-48ee-9bdb-00c9bed7dfaf] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.006859s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-264100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-264100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-264100 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-264100 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 9388: OpenProcess: The parameter is incorrect.
helpers_test.go:525: unable to kill pid 6180: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.39s)

TestFunctional/parallel/ServiceCmd/List (14.76s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 service list
functional_test.go:1469: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 service list: (14.7559971s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.76s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-264100 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.78s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-264100 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d1634cdc-3710-422c-afbf-36b0c4b33199] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d1634cdc-3710-422c-afbf-36b0c4b33199] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0250922s
I0908 11:05:41.747919   11628 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.78s)

TestFunctional/parallel/ServiceCmd/JSONOutput (13.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 service list -o json: (13.6023749s)
functional_test.go:1504: Took "13.6026913s" to run "out/minikube-windows-amd64.exe -p functional-264100 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.60s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-264100 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 6508: TerminateProcess: Access is denied.
helpers_test.go:525: unable to kill pid 9412: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (14.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1290: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (13.7352863s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (14.12s)

TestFunctional/parallel/ProfileCmd/profile_list (15.7s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1325: (dbg) Done: out/minikube-windows-amd64.exe profile list: (15.4213866s)
functional_test.go:1330: Took "15.4224323s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "275.0887ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (15.70s)

TestFunctional/parallel/ProfileCmd/profile_json_output (13.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1376: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (13.3528905s)
functional_test.go:1381: Took "13.3530424s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "263.3805ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (13.62s)

TestFunctional/parallel/DockerEnv/powershell (43.77s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-264100"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-264100": (28.5467629s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264100 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264100 docker-env | Invoke-Expression ; docker images": (15.197757s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (43.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 update-context --alsologtostderr -v=2
functional_test.go:2124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 update-context --alsologtostderr -v=2: (2.7934595s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.79s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.85s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 update-context --alsologtostderr -v=2
functional_test.go:2124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 update-context --alsologtostderr -v=2: (2.8467535s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.85s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.48s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 update-context --alsologtostderr -v=2
functional_test.go:2124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 update-context --alsologtostderr -v=2: (2.480612s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.48s)

TestFunctional/parallel/Version/short (0.71s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 version --short
--- PASS: TestFunctional/parallel/Version/short (0.71s)

TestFunctional/parallel/Version/components (8.22s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 version -o=json --components: (8.2167241s)
--- PASS: TestFunctional/parallel/Version/components (8.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls --format short --alsologtostderr: (8.013747s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264100 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-264100
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-264100
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264100 image ls --format short --alsologtostderr:
I0908 11:09:09.904756   12748 out.go:360] Setting OutFile to fd 1484 ...
I0908 11:09:10.009484   12748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:10.009484   12748 out.go:374] Setting ErrFile to fd 1664...
I0908 11:09:10.009484   12748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:10.025712   12748 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:10.026777   12748 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:10.027169   12748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:12.567840   12748 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:12.567840   12748 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:12.579868   12748 ssh_runner.go:195] Run: systemctl --version
I0908 11:09:12.579868   12748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:14.880182   12748 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:14.880253   12748 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:14.880314   12748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264100 ).networkadapters[0]).ipaddresses[0]
I0908 11:09:17.597549   12748 main.go:141] libmachine: [stdout =====>] : 172.20.61.180

I0908 11:09:17.597613   12748 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:17.597613   12748 sshutil.go:53] new ssh client: &{IP:172.20.61.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264100\id_rsa Username:docker}
I0908 11:09:17.716225   12748 ssh_runner.go:235] Completed: systemctl --version: (5.1362927s)
I0908 11:09:17.726642   12748 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.01s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls --format table --alsologtostderr: (7.8361318s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264100 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ docker.io/library/nginx                     │ alpine            │ 4a86014ec6994 │ 52.5MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/library/minikube-local-cache-test │ functional-264100 │ a7ee1bac9e28f │ 30B    │
│ docker.io/library/nginx                     │ latest            │ ad5708199ec7d │ 192MB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ docker.io/kicbase/echo-server               │ functional-264100 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264100 image ls --format table --alsologtostderr:
I0908 11:09:17.984596    4604 out.go:360] Setting OutFile to fd 1144 ...
I0908 11:09:18.086812    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:18.086812    4604 out.go:374] Setting ErrFile to fd 1456...
I0908 11:09:18.086812    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:18.102500    4604 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:18.103489    4604 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:18.103714    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:20.419735    4604 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:20.419735    4604 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:20.433720    4604 ssh_runner.go:195] Run: systemctl --version
I0908 11:09:20.433720    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:22.747658    4604 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:22.747658    4604 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:22.747658    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264100 ).networkadapters[0]).ipaddresses[0]
I0908 11:09:25.486984    4604 main.go:141] libmachine: [stdout =====>] : 172.20.61.180

I0908 11:09:25.487061    4604 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:25.487355    4604 sshutil.go:53] new ssh client: &{IP:172.20.61.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264100\id_rsa Username:docker}
I0908 11:09:25.602988    4604 ssh_runner.go:235] Completed: systemctl --version: (5.1692029s)
I0908 11:09:25.620846    4604 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.84s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls --format json --alsologtostderr: (7.8023788s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264100 image ls --format json --alsologtostderr:
[{"id":"a7ee1bac9e28f1f99dbb53b71b408d05057f5be762023d67af316a445822a030","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-264100"],"size":"30"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-264100","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264100 image ls --format json --alsologtostderr:
I0908 11:09:17.925596   11792 out.go:360] Setting OutFile to fd 1272 ...
I0908 11:09:18.001754   11792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:18.002281   11792 out.go:374] Setting ErrFile to fd 1108...
I0908 11:09:18.002316   11792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:18.018929   11792 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:18.019015   11792 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:18.019744   11792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:20.316534   11792 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:20.316534   11792 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:20.328535   11792 ssh_runner.go:195] Run: systemctl --version
I0908 11:09:20.328535   11792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:22.625395   11792 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:22.625395   11792 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:22.625395   11792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264100 ).networkadapters[0]).ipaddresses[0]
I0908 11:09:25.372538   11792 main.go:141] libmachine: [stdout =====>] : 172.20.61.180

I0908 11:09:25.372538   11792 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:25.373331   11792 sshutil.go:53] new ssh client: &{IP:172.20.61.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264100\id_rsa Username:docker}
I0908 11:09:25.512297   11792 ssh_runner.go:235] Completed: systemctl --version: (5.1836962s)
I0908 11:09:25.524462   11792 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.80s)

TestFunctional/parallel/ImageCommands/ImageListYaml (8.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls --format yaml --alsologtostderr: (8.0711879s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264100 image ls --format yaml --alsologtostderr:
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: a7ee1bac9e28f1f99dbb53b71b408d05057f5be762023d67af316a445822a030
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-264100
size: "30"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-264100
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264100 image ls --format yaml --alsologtostderr:
I0908 11:09:09.905751    3388 out.go:360] Setting OutFile to fd 1364 ...
I0908 11:09:10.038028    3388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:10.038074    3388 out.go:374] Setting ErrFile to fd 1496...
I0908 11:09:10.038074    3388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:10.056382    3388 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:10.056382    3388 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:10.057371    3388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:12.538440    3388 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:12.538440    3388 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:12.553983    3388 ssh_runner.go:195] Run: systemctl --version
I0908 11:09:12.554662    3388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:14.855667    3388 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:14.855667    3388 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:14.855667    3388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264100 ).networkadapters[0]).ipaddresses[0]
I0908 11:09:17.638793    3388 main.go:141] libmachine: [stdout =====>] : 172.20.61.180

I0908 11:09:17.639163    3388 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:17.639632    3388 sshutil.go:53] new ssh client: &{IP:172.20.61.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264100\id_rsa Username:docker}
I0908 11:09:17.767362    3388 ssh_runner.go:235] Completed: systemctl --version: (5.2133135s)
I0908 11:09:17.779121    3388 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (28.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 ssh pgrep buildkitd
E0908 11:09:13.419480   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264100 ssh pgrep buildkitd: exit status 1 (10.3605346s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image build -t localhost/my-image:functional-264100 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image build -t localhost/my-image:functional-264100 testdata\build --alsologtostderr: (11.3743134s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264100 image build -t localhost/my-image:functional-264100 testdata\build --alsologtostderr:
I0908 11:09:20.287383   12320 out.go:360] Setting OutFile to fd 1712 ...
I0908 11:09:20.404563   12320 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:20.404563   12320 out.go:374] Setting ErrFile to fd 1716...
I0908 11:09:20.404563   12320 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:09:20.429768   12320 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:20.457999   12320 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:09:20.458721   12320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:22.747658   12320 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:22.747658   12320 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:22.761229   12320 ssh_runner.go:195] Run: systemctl --version
I0908 11:09:22.761229   12320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264100 ).state
I0908 11:09:25.044948   12320 main.go:141] libmachine: [stdout =====>] : Running

I0908 11:09:25.045000   12320 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:25.045000   12320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264100 ).networkadapters[0]).ipaddresses[0]
I0908 11:09:27.624463   12320 main.go:141] libmachine: [stdout =====>] : 172.20.61.180

I0908 11:09:27.624463   12320 main.go:141] libmachine: [stderr =====>] : 
I0908 11:09:27.625301   12320 sshutil.go:53] new ssh client: &{IP:172.20.61.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264100\id_rsa Username:docker}
I0908 11:09:27.736271   12320 ssh_runner.go:235] Completed: systemctl --version: (4.9749796s)
I0908 11:09:27.736473   12320 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3969445815.tar
I0908 11:09:27.747613   12320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 11:09:27.783432   12320 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3969445815.tar
I0908 11:09:27.790932   12320 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3969445815.tar: stat -c "%s %y" /var/lib/minikube/build/build.3969445815.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3969445815.tar': No such file or directory
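The `stat` probe above is how `ssh_runner` decides whether the tar already exists on the VM: a non-zero exit triggers the scp that follows. A minimal local sketch of the same check (illustrative path, not the real /var/lib/minikube one):

```shell
# Probe a path the way ssh_runner's existence check does:
# stat exits non-zero for a missing file, so the caller knows to copy it.
f=$(mktemp -u)   # mktemp -u yields a path that does not exist yet
if stat -c "%s %y" "$f" 2>/dev/null; then
  echo "present: skip the copy"
else
  echo "absent: scp the tar"   # → absent: scp the tar
fi
```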
I0908 11:09:27.791528   12320 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3969445815.tar --> /var/lib/minikube/build/build.3969445815.tar (3072 bytes)
I0908 11:09:27.853166   12320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3969445815
I0908 11:09:27.882933   12320 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3969445815 -xf /var/lib/minikube/build/build.3969445815.tar
I0908 11:09:27.908486   12320 docker.go:361] Building image: /var/lib/minikube/build/build.3969445815
I0908 11:09:27.919272   12320 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-264100 /var/lib/minikube/build/build.3969445815
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 ...
#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#4 DONE 0.9s
#6 [2/3] RUN true
#6 DONE 0.4s
#7 [3/3] ADD content.txt /
#7 DONE 0.2s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:125e7b214e30766d5d99b752df42f04784beba7d79478f0e43ddbb38cfa9540b done
#8 naming to localhost/my-image:functional-264100 0.0s done
#8 DONE 0.2s
I0908 11:09:31.412143   12320 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-264100 /var/lib/minikube/build/build.3969445815: (3.4927138s)
I0908 11:09:31.423422   12320 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3969445815
I0908 11:09:31.458828   12320 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3969445815.tar
I0908 11:09:31.486907   12320 build_images.go:217] Built localhost/my-image:functional-264100 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3969445815.tar
I0908 11:09:31.486907   12320 build_images.go:133] succeeded building to: functional-264100
I0908 11:09:31.486907   12320 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls: (7.0556323s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.79s)
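The ImageBuild flow above stages the build context as a tar (scp, `sudo mkdir -p`, `sudo tar -xf`), runs `docker build` against the unpacked directory, then removes the staging directory. A local sketch of just the staging and cleanup steps (temp paths assumed; the `docker build` itself is omitted):

```shell
# Stage a build-context tar and unpack it, mirroring build_images on the VM.
work=$(mktemp -d)
printf 'FROM scratch\nADD content.txt /\n' > "$work/Dockerfile"
printf 'hello\n' > "$work/content.txt"
tar -cf "$work/build.tar" -C "$work" Dockerfile content.txt
mkdir -p "$work/ctx"                       # the sudo mkdir -p step in the log
tar -xf "$work/build.tar" -C "$work/ctx"   # the sudo tar -C ... -xf step
ls "$work/ctx"                             # unpacked context: Dockerfile, content.txt
rm -rf "$work"                             # the sudo rm -rf cleanup step
```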

TestFunctional/parallel/ImageCommands/Setup (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.2714573s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-264100
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.38s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image load --daemon kicbase/echo-server:functional-264100 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image load --daemon kicbase/echo-server:functional-264100 --alsologtostderr: (11.1418591s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls: (8.08781s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image load --daemon kicbase/echo-server:functional-264100 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image load --daemon kicbase/echo-server:functional-264100 --alsologtostderr: (11.3154547s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls: (7.6152195s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (18.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-264100
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image load --daemon kicbase/echo-server:functional-264100 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image load --daemon kicbase/echo-server:functional-264100 --alsologtostderr: (10.2906454s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls: (7.1787434s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (18.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image save kicbase/echo-server:functional-264100 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image save kicbase/echo-server:functional-264100 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (7.7231591s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.72s)

TestFunctional/parallel/ImageCommands/ImageRemove (14.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image rm kicbase/echo-server:functional-264100 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image rm kicbase/echo-server:functional-264100 --alsologtostderr: (7.4477283s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls: (7.2118107s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (14.66s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (7.4520927s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image ls: (7.078817s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-264100
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264100 image save --daemon kicbase/echo-server:functional-264100 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264100 image save --daemon kicbase/echo-server:functional-264100 --alsologtostderr: (7.3712863s)
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-264100
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.57s)

TestFunctional/delete_echo-server_images (0.21s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-264100
--- PASS: TestFunctional/delete_echo-server_images (0.21s)

TestFunctional/delete_my-image_image (0.08s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-264100
--- PASS: TestFunctional/delete_my-image_image (0.08s)

TestFunctional/delete_minikube_cached_images (0.1s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-264100
--- PASS: TestFunctional/delete_minikube_cached_images (0.10s)

TestMultiControlPlane/serial/StartCluster (725.9s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=hyperv
E0908 11:12:50.342787   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.259562   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.267583   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.279890   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.302561   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.344704   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.427030   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.589035   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:15.911057   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:16.553797   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:17.836335   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:20.399302   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:25.522591   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:35.765678   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:56.248200   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:16:37.210353   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:17:50.347002   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:17:59.133771   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:20:15.263362   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:20:42.978683   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:22:50.351292   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=hyperv: (11m29.038443s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 status --alsologtostderr -v 5: (36.8635514s)
--- PASS: TestMultiControlPlane/serial/StartCluster (725.90s)

TestMultiControlPlane/serial/DeployApp (12.66s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 kubectl -- rollout status deployment/busybox: (5.2993957s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- nslookup kubernetes.io: (1.3101862s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-2wjzs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-9vn9f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 kubectl -- exec busybox-7b57f96db7-qhn4b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.66s)

TestMultiControlPlane/serial/AddWorkerNode (251.68s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 node add --alsologtostderr -v 5
E0908 11:27:50.355308   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 node add --alsologtostderr -v 5: (3m23.1678996s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 status --alsologtostderr -v 5: (48.5095176s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (251.68s)

TestMultiControlPlane/serial/NodeLabels (0.29s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-331000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.29s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (48.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0908 11:30:15.270868   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (48.2070343s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (48.21s)

TestMultiControlPlane/serial/CopyFile (631.29s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 status --output json --alsologtostderr -v 5
E0908 11:31:38.349729   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 status --output json --alsologtostderr -v 5: (48.445785s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000:/home/docker/cp-test.txt: (9.586627s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt": (9.3658177s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000.txt: (9.4893406s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt": (9.6617951s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000_ha-331000-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000_ha-331000-m02.txt: (16.9056698s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt"
E0908 11:32:50.358595   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt": (9.6924286s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test_ha-331000_ha-331000-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test_ha-331000_ha-331000-m02.txt": (9.5302334s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt ha-331000-m03:/home/docker/cp-test_ha-331000_ha-331000-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt ha-331000-m03:/home/docker/cp-test_ha-331000_ha-331000-m03.txt: (16.7510648s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt": (9.5701454s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test_ha-331000_ha-331000-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test_ha-331000_ha-331000-m03.txt": (9.4989957s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt ha-331000-m04:/home/docker/cp-test_ha-331000_ha-331000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000:/home/docker/cp-test.txt ha-331000-m04:/home/docker/cp-test_ha-331000_ha-331000-m04.txt: (16.4406248s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test.txt": (9.5292384s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test_ha-331000_ha-331000-m04.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test_ha-331000_ha-331000-m04.txt": (9.4677842s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000-m02:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000-m02:/home/docker/cp-test.txt: (9.5936178s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt": (9.5267933s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m02.txt: (9.6341659s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt": (9.600401s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m02_ha-331000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m02_ha-331000.txt: (16.8201056s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt"
E0908 11:35:15.274464   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt": (9.4593131s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test_ha-331000-m02_ha-331000.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test_ha-331000-m02_ha-331000.txt": (9.6521399s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt ha-331000-m03:/home/docker/cp-test_ha-331000-m02_ha-331000-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt ha-331000-m03:/home/docker/cp-test_ha-331000-m02_ha-331000-m03.txt: (16.6501817s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt": (9.4431147s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test_ha-331000-m02_ha-331000-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test_ha-331000-m02_ha-331000-m03.txt": (9.4532562s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt ha-331000-m04:/home/docker/cp-test_ha-331000-m02_ha-331000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m02:/home/docker/cp-test.txt ha-331000-m04:/home/docker/cp-test_ha-331000-m02_ha-331000-m04.txt: (16.7978855s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test.txt": (9.7565674s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test_ha-331000-m02_ha-331000-m04.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test_ha-331000-m02_ha-331000-m04.txt": (9.5829665s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000-m03:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000-m03:/home/docker/cp-test.txt: (9.6979534s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt": (9.4969471s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m03.txt: (9.5227928s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt": (9.4642459s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m03_ha-331000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m03_ha-331000.txt: (16.6526316s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt": (9.6057566s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000.txt"
E0908 11:37:50.362087   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000.txt": (9.6165322s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000-m03_ha-331000-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000-m03_ha-331000-m02.txt: (16.5064149s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt": (9.5582653s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000-m02.txt": (9.7035044s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000-m04:/home/docker/cp-test_ha-331000-m03_ha-331000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m03:/home/docker/cp-test.txt ha-331000-m04:/home/docker/cp-test_ha-331000-m03_ha-331000-m04.txt: (16.839087s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test.txt": (9.4789735s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000-m04.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test_ha-331000-m03_ha-331000-m04.txt": (9.4786373s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp testdata\cp-test.txt ha-331000-m04:/home/docker/cp-test.txt: (9.5131134s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt": (9.4801777s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile984883306\001\cp-test_ha-331000-m04.txt: (9.6653879s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt": (9.575871s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m04_ha-331000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000:/home/docker/cp-test_ha-331000-m04_ha-331000.txt: (16.8664248s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt": (9.7297054s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000.txt"
E0908 11:40:15.278500   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000 "sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000.txt": (9.4948559s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000-m04_ha-331000-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000-m02:/home/docker/cp-test_ha-331000-m04_ha-331000-m02.txt: (16.8259208s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt": (9.5930407s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m02 "sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000-m02.txt": (9.4371425s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000-m03:/home/docker/cp-test_ha-331000-m04_ha-331000-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 cp ha-331000-m04:/home/docker/cp-test.txt ha-331000-m03:/home/docker/cp-test_ha-331000-m04_ha-331000-m03.txt: (16.5090749s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m04 "sudo cat /home/docker/cp-test.txt": (9.4902783s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-331000 ssh -n ha-331000-m03 "sudo cat /home/docker/cp-test_ha-331000-m04_ha-331000-m03.txt": (9.5305441s)
--- PASS: TestMultiControlPlane/serial/CopyFile (631.29s)

TestImageBuild/serial/Setup (190.73s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-192800 --driver=hyperv
E0908 11:47:50.370177   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:48:18.364818   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-192800 --driver=hyperv: (3m10.7295519s)
--- PASS: TestImageBuild/serial/Setup (190.73s)

TestImageBuild/serial/NormalBuild (10.69s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-192800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-192800: (10.6872076s)
--- PASS: TestImageBuild/serial/NormalBuild (10.69s)

TestImageBuild/serial/BuildWithBuildArg (8.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-192800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-192800: (8.9718883s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.97s)

TestImageBuild/serial/BuildWithDockerIgnore (8.22s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-192800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-192800: (8.2166462s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.22s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-192800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-192800: (8.2871457s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.29s)

TestJSONOutput/start/Command (226.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-224700 --output=json --user=testUser --memory=3072 --wait=true --driver=hyperv
E0908 11:50:15.286162   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:52:50.374134   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-224700 --output=json --user=testUser --memory=3072 --wait=true --driver=hyperv: (3m46.184082s)
--- PASS: TestJSONOutput/start/Command (226.19s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.99s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-224700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-224700 --output=json --user=testUser: (7.9872154s)
--- PASS: TestJSONOutput/pause/Command (7.99s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-224700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-224700 --output=json --user=testUser: (7.6989897s)
--- PASS: TestJSONOutput/unpause/Command (7.70s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (39.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-224700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-224700 --output=json --user=testUser: (39.2207762s)
--- PASS: TestJSONOutput/stop/Command (39.22s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.94s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-166000 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-166000 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (275.8681ms)

-- stdout --
	{"specversion":"1.0","id":"b2a26c5f-32ed-4f77-9fb6-2bab8afcc729","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-166000] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"607bd39c-ff3f-4879-991f-9e2bcdffb301","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"8cde5da6-36da-4f4e-bfed-6eb4d646b514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"54b86514-8ac2-4e11-8c3d-80005a5268d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"aec50f4e-547b-4e58-8384-2c1c9005af34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21512"}}
	{"specversion":"1.0","id":"f869acc2-f8a5-407a-b55f-cf2644b0a7a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0335c0a4-291a-4c45-bdb0-bc878e110a70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-166000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-166000
--- PASS: TestErrorJSONOutput (0.94s)

TestMainNoArgs (0.23s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

TestMinikubeProfile (522.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-087300 --driver=hyperv
E0908 11:55:15.290162   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:50.378312   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-087300 --driver=hyperv: (3m10.3293391s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-087300 --driver=hyperv
E0908 11:59:13.464215   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:00:15.293848   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-087300 --driver=hyperv: (3m17.0722849s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-087300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.6761432s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-087300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.8418558s)
helpers_test.go:175: Cleaning up "second-087300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-087300
E0908 12:02:50.380708   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-087300: (40.7976836s)
helpers_test.go:175: Cleaning up "first-087300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-087300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-087300: (46.2976362s)
--- PASS: TestMinikubeProfile (522.69s)
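The `profile list -ojson` calls above emit machine-readable profile data. A minimal sketch of consuming that output follows; the `valid`/`invalid` top-level keys and the field names in the embedded sample are assumptions about the JSON shape, not values captured in this log:

```python
import json

# Hypothetical sample shaped like `minikube profile list -ojson` output;
# the exact schema and field values here are assumptions, not from this run.
sample = """
{
  "invalid": [],
  "valid": [
    {"Name": "first-087300", "Status": "Running"},
    {"Name": "second-087300", "Status": "Running"}
  ]
}
"""

profiles = json.loads(sample)
# Collect the names of all valid profiles, as the test's JSON check would.
names = [p["Name"] for p in profiles.get("valid", [])]
print(names)  # -> ['first-087300', 'second-087300']
```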

                                                
                                    
TestMountStart/serial/StartWithMountFirst (155.51s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-476900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial3517297216\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0908 12:04:58.379578   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:15.297414   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-476900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial3517297216\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m34.5080551s)
--- PASS: TestMountStart/serial/StartWithMountFirst (155.51s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.49s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-476900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-476900 ssh -- ls /minikube-host: (9.4880516s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.49s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (155.87s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-476900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial3517297216\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0908 12:07:50.385643   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-476900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial3517297216\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m34.8731138s)
--- PASS: TestMountStart/serial/StartWithMountSecond (155.87s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.44s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-476900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-476900 ssh -- ls /minikube-host: (9.4425092s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.44s)

                                                
                                    
TestMountStart/serial/DeleteFirst (30.78s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-476900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-476900 --alsologtostderr -v=5: (30.774985s)
--- PASS: TestMountStart/serial/DeleteFirst (30.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-476900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-476900 ssh -- ls /minikube-host: (9.3321616s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.33s)

                                                
                                    
TestMountStart/serial/Stop (28.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-476900
E0908 12:10:15.301476   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-476900: (28.2924804s)
--- PASS: TestMountStart/serial/Stop (28.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (114.78s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-476900
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-476900: (1m53.7846433s)
--- PASS: TestMountStart/serial/RestartStopped (114.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-476900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-476900 ssh -- ls /minikube-host: (9.3058222s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (432.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-818700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=hyperv
E0908 12:15:15.304749   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:53.478671   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:17:50.393572   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-818700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=hyperv: (6m49.3158078s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 status --alsologtostderr
E0908 12:20:15.308822   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 status --alsologtostderr: (23.4554476s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (432.77s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.82s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- rollout status deployment/busybox: (4.4407617s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- nslookup kubernetes.io: (1.2320617s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ndqg5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-818700 -- exec busybox-7b57f96db7-ztvwm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.82s)
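The jsonpath queries above (`{.items[*].status.podIP}` and `{.items[*].metadata.name}`) flatten fields from the pod list into a space-separated string. A local sketch of the equivalent extraction; the pod names and IPs in the sample are illustrative, not the ones from this run:

```python
import json

# Hypothetical pod list shaped like `kubectl get pods -o json`;
# names/IPs are placeholders, not from this log.
pods = json.loads("""
{
  "items": [
    {"metadata": {"name": "busybox-a"}, "status": {"podIP": "10.244.0.3"}},
    {"metadata": {"name": "busybox-b"}, "status": {"podIP": "10.244.1.2"}}
  ]
}
""")

# Equivalent of jsonpath '{.items[*].status.podIP}'
pod_ips = " ".join(p["status"]["podIP"] for p in pods["items"])
# Equivalent of jsonpath '{.items[*].metadata.name}'
pod_names = " ".join(p["metadata"]["name"] for p in pods["items"])
print(pod_ips)    # -> 10.244.0.3 10.244.1.2
print(pod_names)  # -> busybox-a busybox-b
```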

                                                
                                    
TestMultiNode/serial/AddNode (238.61s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-818700 -v=5 --alsologtostderr
E0908 12:21:38.394419   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:22:50.396458   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-818700 -v=5 --alsologtostderr: (3m23.2922287s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 status --alsologtostderr
E0908 12:25:15.312649   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 status --alsologtostderr: (35.3142012s)
--- PASS: TestMultiNode/serial/AddNode (238.61s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-818700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (35.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (35.4597924s)
--- PASS: TestMultiNode/serial/ProfileList (35.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (355.62s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 status --output json --alsologtostderr: (35.200037s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp testdata\cp-test.txt multinode-818700:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp testdata\cp-test.txt multinode-818700:/home/docker/cp-test.txt: (9.2600683s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt": (9.1559952s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700.txt: (9.1780822s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt": (9.178412s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700:/home/docker/cp-test.txt multinode-818700-m02:/home/docker/cp-test_multinode-818700_multinode-818700-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700:/home/docker/cp-test.txt multinode-818700-m02:/home/docker/cp-test_multinode-818700_multinode-818700-m02.txt: (16.2659003s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt": (9.3395718s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test_multinode-818700_multinode-818700-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test_multinode-818700_multinode-818700-m02.txt": (9.5200444s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700:/home/docker/cp-test.txt multinode-818700-m03:/home/docker/cp-test_multinode-818700_multinode-818700-m03.txt
E0908 12:27:50.399488   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700:/home/docker/cp-test.txt multinode-818700-m03:/home/docker/cp-test_multinode-818700_multinode-818700-m03.txt: (16.0338572s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test.txt": (9.2520208s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test_multinode-818700_multinode-818700-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test_multinode-818700_multinode-818700-m03.txt": (9.2595528s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp testdata\cp-test.txt multinode-818700-m02:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp testdata\cp-test.txt multinode-818700-m02:/home/docker/cp-test.txt: (9.342191s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt": (9.3661055s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700-m02.txt: (9.1912368s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt": (9.2772259s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt multinode-818700:/home/docker/cp-test_multinode-818700-m02_multinode-818700.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt multinode-818700:/home/docker/cp-test_multinode-818700-m02_multinode-818700.txt: (16.1580151s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt": (9.3093581s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test_multinode-818700-m02_multinode-818700.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test_multinode-818700-m02_multinode-818700.txt": (9.2944038s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt multinode-818700-m03:/home/docker/cp-test_multinode-818700-m02_multinode-818700-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m02:/home/docker/cp-test.txt multinode-818700-m03:/home/docker/cp-test_multinode-818700-m02_multinode-818700-m03.txt: (16.2165219s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test.txt": (9.2577552s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test_multinode-818700-m02_multinode-818700-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test_multinode-818700-m02_multinode-818700-m03.txt": (9.2623021s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp testdata\cp-test.txt multinode-818700-m03:/home/docker/cp-test.txt
E0908 12:30:15.315945   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp testdata\cp-test.txt multinode-818700-m03:/home/docker/cp-test.txt: (9.6744635s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt": (9.3498638s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile256537690\001\cp-test_multinode-818700-m03.txt: (9.3999131s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt": (9.2856996s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt multinode-818700:/home/docker/cp-test_multinode-818700-m03_multinode-818700.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt multinode-818700:/home/docker/cp-test_multinode-818700-m03_multinode-818700.txt: (16.1544468s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt": (9.2997772s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test_multinode-818700-m03_multinode-818700.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700 "sudo cat /home/docker/cp-test_multinode-818700-m03_multinode-818700.txt": (9.3087466s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt multinode-818700-m02:/home/docker/cp-test_multinode-818700-m03_multinode-818700-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 cp multinode-818700-m03:/home/docker/cp-test.txt multinode-818700-m02:/home/docker/cp-test_multinode-818700-m03_multinode-818700-m02.txt: (16.1800343s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m03 "sudo cat /home/docker/cp-test.txt": (9.4121469s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test_multinode-818700-m03_multinode-818700-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 ssh -n multinode-818700-m02 "sudo cat /home/docker/cp-test_multinode-818700-m03_multinode-818700-m02.txt": (9.2046006s)
--- PASS: TestMultiNode/serial/CopyFile (355.62s)
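Every `cp` step above is verified by cat-ing the file back over ssh and comparing contents. The same round-trip check can be sketched locally; plain temp files stand in for the `minikube cp` / `ssh -- sudo cat` pair, and the file content is illustrative:

```python
import pathlib
import shutil
import tempfile

# Local stand-in for the cp-test round trip: copy a file to a second
# location, then verify the destination matches the source byte-for-byte.
src_dir = pathlib.Path(tempfile.mkdtemp())
dst_dir = pathlib.Path(tempfile.mkdtemp())

src = src_dir / "cp-test.txt"
src.write_text("illustrative cp-test payload\n")  # placeholder content

dst = dst_dir / "cp-test_copy.txt"
shutil.copyfile(src, dst)  # stands in for `minikube cp`

# Stands in for `ssh -n <node> "sudo cat <file>"` plus the comparison.
match = src.read_bytes() == dst.read_bytes()
print(match)  # -> True
```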

                                                
                                    
TestMultiNode/serial/StopNode (77s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 node stop m03: (25.6027151s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 status
E0908 12:32:33.494262   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-818700 status: exit status 7 (25.7668028s)

-- stdout --
	multinode-818700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-818700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-818700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 status --alsologtostderr
E0908 12:32:50.403909   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-818700 status --alsologtostderr: exit status 7 (25.6257704s)

-- stdout --
	multinode-818700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-818700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-818700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 12:32:46.241259    7740 out.go:360] Setting OutFile to fd 1492 ...
	I0908 12:32:46.306244    7740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:32:46.306244    7740 out.go:374] Setting ErrFile to fd 1224...
	I0908 12:32:46.306244    7740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:32:46.320272    7740 out.go:368] Setting JSON to false
	I0908 12:32:46.320272    7740 mustload.go:65] Loading cluster: multinode-818700
	I0908 12:32:46.320272    7740 notify.go:220] Checking for updates...
	I0908 12:32:46.320272    7740 config.go:182] Loaded profile config "multinode-818700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:32:46.321283    7740 status.go:174] checking status of multinode-818700 ...
	I0908 12:32:46.322241    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:32:48.414741    7740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:32:48.414741    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:32:48.414741    7740 status.go:371] multinode-818700 host status = "Running" (err=<nil>)
	I0908 12:32:48.414741    7740 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:32:48.415889    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:32:50.602052    7740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:32:50.602187    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:32:50.602299    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:32:53.130008    7740 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:32:53.130373    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:32:53.130437    7740 host.go:66] Checking if "multinode-818700" exists ...
	I0908 12:32:53.144096    7740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:32:53.144096    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700 ).state
	I0908 12:32:55.247547    7740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:32:55.247547    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:32:55.247547    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700 ).networkadapters[0]).ipaddresses[0]
	I0908 12:32:57.838852    7740 main.go:141] libmachine: [stdout =====>] : 172.20.50.55
	
	I0908 12:32:57.838852    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:32:57.839801    7740 sshutil.go:53] new ssh client: &{IP:172.20.50.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700\id_rsa Username:docker}
	I0908 12:32:57.956910    7740 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8122021s)
	I0908 12:32:57.971139    7740 ssh_runner.go:195] Run: systemctl --version
	I0908 12:32:57.992110    7740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:32:58.020986    7740 kubeconfig.go:125] found "multinode-818700" server: "https://172.20.50.55:8443"
	I0908 12:32:58.021068    7740 api_server.go:166] Checking apiserver status ...
	I0908 12:32:58.032164    7740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:32:58.080177    7740 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2614/cgroup
	W0908 12:32:58.104887    7740 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2614/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:32:58.115883    7740 ssh_runner.go:195] Run: ls
	I0908 12:32:58.123807    7740 api_server.go:253] Checking apiserver healthz at https://172.20.50.55:8443/healthz ...
	I0908 12:32:58.131099    7740 api_server.go:279] https://172.20.50.55:8443/healthz returned 200:
	ok
	I0908 12:32:58.131099    7740 status.go:463] multinode-818700 apiserver status = Running (err=<nil>)
	I0908 12:32:58.131099    7740 status.go:176] multinode-818700 status: &{Name:multinode-818700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:32:58.131204    7740 status.go:174] checking status of multinode-818700-m02 ...
	I0908 12:32:58.131339    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:33:00.347918    7740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:33:00.347950    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:33:00.348139    7740 status.go:371] multinode-818700-m02 host status = "Running" (err=<nil>)
	I0908 12:33:00.348139    7740 host.go:66] Checking if "multinode-818700-m02" exists ...
	I0908 12:33:00.348978    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:33:02.460322    7740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:33:02.460322    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:33:02.460322    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:33:04.956886    7740 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:33:04.956886    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:33:04.956886    7740 host.go:66] Checking if "multinode-818700-m02" exists ...
	I0908 12:33:04.973729    7740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:33:04.973729    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m02 ).state
	I0908 12:33:07.022297    7740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0908 12:33:07.023044    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:33:07.023120    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-818700-m02 ).networkadapters[0]).ipaddresses[0]
	I0908 12:33:09.488509    7740 main.go:141] libmachine: [stdout =====>] : 172.20.62.186
	
	I0908 12:33:09.488509    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:33:09.489515    7740 sshutil.go:53] new ssh client: &{IP:172.20.62.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-818700-m02\id_rsa Username:docker}
	I0908 12:33:09.597429    7740 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6236404s)
	I0908 12:33:09.610730    7740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:33:09.642141    7740 status.go:176] multinode-818700-m02 status: &{Name:multinode-818700-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:33:09.642141    7740 status.go:174] checking status of multinode-818700-m03 ...
	I0908 12:33:09.643274    7740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-818700-m03 ).state
	I0908 12:33:11.716978    7740 main.go:141] libmachine: [stdout =====>] : Off
	
	I0908 12:33:11.716978    7740 main.go:141] libmachine: [stderr =====>] : 
	I0908 12:33:11.716978    7740 status.go:371] multinode-818700-m03 host status = "Stopped" (err=<nil>)
	I0908 12:33:11.716978    7740 status.go:384] host is not running, skipping remaining checks
	I0908 12:33:11.716978    7740 status.go:176] multinode-818700-m03 status: &{Name:multinode-818700-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (77.00s)

TestMultiNode/serial/StartAfterStop (189.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 node start m03 -v=5 --alsologtostderr
E0908 12:35:15.319601   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 node start m03 -v=5 --alsologtostderr: (2m33.4910887s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-818700 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-818700 status -v=5 --alsologtostderr: (35.6489055s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (189.34s)

TestPreload (484.32s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-335300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.32.0
E0908 12:47:50.415114   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:49:13.509434   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-335300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.32.0: (4m5.7778324s)
preload_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-335300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-335300 image pull gcr.io/k8s-minikube/busybox: (8.8460607s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-335300
E0908 12:50:15.330759   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-335300: (34.8569997s)
preload_test.go:65: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-335300 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0908 12:52:50.423834   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-335300 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m25.4917096s)
preload_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-335300 image list
preload_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-335300 image list: (7.1392997s)
helpers_test.go:175: Cleaning up "test-preload-335300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-335300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-335300: (42.2001773s)
--- PASS: TestPreload (484.32s)

TestScheduledStopWindows (323.88s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-121700 --memory=3072 --driver=hyperv
E0908 12:54:58.425203   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:55:15.335000   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-121700 --memory=3072 --driver=hyperv: (3m12.2216548s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-121700 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-121700 --schedule 5m: (10.4281088s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-121700 -n scheduled-stop-121700
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-121700 -n scheduled-stop-121700: exit status 1 (10.0117512s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-121700 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-121700 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.3643767s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-121700 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-121700 --schedule 5s: (10.7647122s)
E0908 12:57:50.423022   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-121700
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-121700: exit status 7 (2.3622664s)

-- stdout --
	scheduled-stop-121700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-121700 -n scheduled-stop-121700
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-121700 -n scheduled-stop-121700: exit status 7 (2.3657058s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-121700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-121700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-121700: (26.3605492s)
--- PASS: TestScheduledStopWindows (323.88s)

TestRunningBinaryUpgrade (1042.82s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.32.0.985138657.exe start -p running-upgrade-208500 --memory=3072 --vm-driver=hyperv
E0908 13:00:15.338710   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.32.0.985138657.exe start -p running-upgrade-208500 --memory=3072 --vm-driver=hyperv: (8m12.045437s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-208500 --memory=3072 --alsologtostderr -v=1 --driver=hyperv
E0908 13:07:50.430074   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-020700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-208500 --memory=3072 --alsologtostderr -v=1 --driver=hyperv: (8m25.2778779s)
helpers_test.go:175: Cleaning up "running-upgrade-208500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-208500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-208500: (44.2765627s)
--- PASS: TestRunningBinaryUpgrade (1042.82s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-208500 --no-kubernetes --kubernetes-version=v1.28.0 --driver=hyperv
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-208500 --no-kubernetes --kubernetes-version=v1.28.0 --driver=hyperv: exit status 14 (391.5984ms)

-- stdout --
	* [NoKubernetes-208500] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

TestPause/serial/Start (409.91s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-955700 --memory=3072 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-955700 --memory=3072 --install-addons=false --wait=all --driver=hyperv: (6m49.912256s)
--- PASS: TestPause/serial/Start (409.91s)

TestPause/serial/SecondStartNoReconfiguration (413.23s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-955700 --alsologtostderr -v=1 --driver=hyperv
E0908 13:10:15.345988   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:11:38.439978   11628 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-264100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-955700 --alsologtostderr -v=1 --driver=hyperv: (6m53.2087014s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (413.23s)

TestPause/serial/Pause (8.44s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-955700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-955700 --alsologtostderr -v=5: (8.4388043s)
--- PASS: TestPause/serial/Pause (8.44s)

TestPause/serial/VerifyStatus (12.7s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-955700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-955700 --output=json --layout=cluster: exit status 2 (12.6949808s)

-- stdout --
	{"Name":"pause-955700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-955700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (12.70s)

TestPause/serial/Unpause (8.38s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-955700 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-955700 --alsologtostderr -v=5: (8.384111s)
--- PASS: TestPause/serial/Unpause (8.38s)

TestPause/serial/PauseAgain (9.03s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-955700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-955700 --alsologtostderr -v=5: (9.0272078s)
--- PASS: TestPause/serial/PauseAgain (9.03s)


Test skip (33/208)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-264100 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-264100 --alsologtostderr -v=1] ...
helpers_test.go:519: unable to terminate pid 9684: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-264100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0255034s)

-- stdout --
	* [functional-264100] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	I0908 11:06:24.640546   10084 out.go:360] Setting OutFile to fd 1416 ...
	I0908 11:06:24.757885   10084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:06:24.757885   10084 out.go:374] Setting ErrFile to fd 1508...
	I0908 11:06:24.757885   10084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:06:24.787285   10084 out.go:368] Setting JSON to false
	I0908 11:06:24.792716   10084 start.go:130] hostinfo: {"hostname":"minikube6","uptime":298436,"bootTime":1757031148,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 11:06:24.792716   10084 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 11:06:24.800037   10084 out.go:179] * [functional-264100] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 11:06:24.815401   10084 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 11:06:24.818545   10084 notify.go:220] Checking for updates...
	I0908 11:06:24.821542   10084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:06:24.826393   10084 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 11:06:24.829556   10084 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:06:24.832280   10084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:06:24.837665   10084 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:06:24.838761   10084 driver.go:421] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:995: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)
TestFunctional/parallel/InternationalLanguage (5.02s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-264100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0167966s)
-- stdout --
	* [functional-264100] minikube v1.36.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	I0908 11:06:19.595704    8440 out.go:360] Setting OutFile to fd 1108 ...
	I0908 11:06:19.684381    8440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:06:19.684381    8440 out.go:374] Setting ErrFile to fd 824...
	I0908 11:06:19.684381    8440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:06:19.709351    8440 out.go:368] Setting JSON to false
	I0908 11:06:19.713937    8440 start.go:130] hostinfo: {"hostname":"minikube6","uptime":298431,"bootTime":1757031148,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0908 11:06:19.714171    8440 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0908 11:06:19.719771    8440 out.go:179] * [functional-264100] minikube v1.36.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0908 11:06:19.723869    8440 notify.go:220] Checking for updates...
	I0908 11:06:19.727482    8440 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0908 11:06:19.732481    8440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:06:19.735479    8440 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0908 11:06:19.738474    8440 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:06:19.741477    8440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:06:19.745475    8440 config.go:182] Loaded profile config "functional-264100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:06:19.746476    8440 driver.go:421] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:1040: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.02s)
TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)